Mira Murati

Mira Murati (born 1988) is an Albanian engineer and business executive who joined OpenAI in 2018 and has served as its chief technology officer since 2022.

Quotes

 * We're working on something that will change everything. Will change the way that we work, the way that we interact with each other, and the way that we think and everything, really, all aspects of life.
 * Behind the Tech with Kevin Scott, Microsoft podcast (July 2023)


 * As with other revolutions that we've gone through, there will be new jobs and some jobs will be lost.
 * As quoted in ''The Creator of ChatGPT Thinks AI Should Be Regulated'' (February 5, 2023)


 * Journalist: There's always a fear that government involvement can slow innovation. You don't think it's too early for policymakers and regulators to get involved?
 * Murati: It's not too early. Everyone needs to start getting involved, given the impact these technologies are going to have.
 * As quoted in ''The Creator of ChatGPT Thinks AI Should Be Regulated'' (February 5, 2023)


 * Artificial intelligence (AI) can be misused, or it can be used by bad actors. So then, there are questions about how you govern the use of this technology globally. How do you govern the use of AI in a way that is aligned with human values?
 * As quoted in What is the Indian connection of Mira Murati, CTO of ChatGPT creator OpenAI? (February 10, 2023)


 * Journalist: Let's take a step back: There's so much interest not just in the product but the people making this all happen. What do you think are the most formative experiences you've had that have shaped you and who you are today?
 * Murati: Certainly growing up in Albania. But also, I started in aerospace, and my time at Tesla was certainly a very formative moment—going through the whole experience of design and deployment of a whole vehicle. And definitely coming to OpenAI. Going from just 40 or 50 of us when I joined and we were essentially a research lab, and now we're a full-blown product company with millions of users and a ton of technologists. [OpenAI now has about 500 employees.]
 * As quoted in Mira Murati, the young CTO of OpenAI, is building ChatGPT and shaping your future (October 5, 2023)


 * Journalist: Will GPT-5 solve the hallucination problem?
 * Murati: Well, I mean maybe. Let's see. We've made a ton of progress on the hallucination issue with GPT-4, but we're not where we need to be. But we're sort of on the right track. And it's unknown, it's research. It could be that continuing in this path of reinforcement learning with human feedback, we can get to reliable outputs. And we're also adding other elements like retrieval and search. So you can provide more factual answers or get more factual outputs from the model. So there's a combination of technologies that we're putting together to kind of reduce the hallucination issue.
 * As quoted in A Conversation with OpenAI's Sam Altman and Mira Murati (October 20, 2023)


 * It started with math. When I was a kid, I just gravitated toward math. I would do problem sets all the time and then eventually did Olympiads and I loved doing that. It was such a passion.
 * As quoted in The secrets behind the success of Mira Murati – finally revealed (September 18, 2023)


 * Journalist: Is there a path between products like GPT-4 and AGI?
 * Murati: We're far from the point of having a safe, reliable, aligned AGI [Artificial General Intelligence] system. Our path to getting there has a couple of important vectors. From a research standpoint, we're trying to build systems that have a robust understanding of the world similar to how we do as humans. Systems like GPT-3 initially were trained only on text data, but our world is not only made of text, so we have images as well and then we started introducing other modalities. The other angle has been scaling these systems to increase their generality. With GPT-4, we're dealing with a much more capable system, specifically from the angle of reasoning about things. This capability is key. If the model is smart enough to understand an ambiguous direction or a high-level direction, then you can figure out how to make it follow this direction. But if it doesn't even understand that high-level goal or high-level direction, it's much harder to align it. It's not enough to build this technology in a vacuum in a lab. We need this contact with reality, with the real world, to see where are the weaknesses, and where are the breakage points, and try to do so in a way that's controlled and low risk and get as much feedback as possible.
 * As quoted in Insider Q&A: OpenAI CTO Mira Murati on shepherding ChatGPT (April 24, 2023)