Sam Altman

AI is cool i guess

Sam Altman
Nov 14, 4:48 AM
Small-but-happy win: If you tell ChatGPT not to use em-dashes in your custom instructions, it finally does what it's supposed to do!
Sam Altman
Nov 13, 7:11 PM
GPT-5.1 is now available in the API. Pricing is the same as GPT-5. We are also releasing gpt-5.1-codex and gpt-5.1-codex-mini in the API, specialized for long-running coding tasks. Prompt caching now lasts up to 24 hours! Updated evals in our blog post.
Sam Altman
Nov 13, 7:11 PM
Understanding neural networks through sparse circuits:
Sam Altman
Nov 12, 8:14 PM
RT @jasonkwon: We’re fighting this overreach on user privacy. As @sama has mentioned before, we need a new form of privilege - AI privileg…
Sam Altman
Nov 12, 7:35 PM
GPT-5.1 is out! It's a nice upgrade. I particularly like the improvements in instruction following, and the adaptive thinking. The intelligence and style improvements are good too.
Sam Altman
Nov 10, 9:59 PM
RT @gdb: Welcome @sk7037 to OpenAI! Incredibly excited to work with him on designing and building our compute infrastructure, which will p…
Sam Altman
Nov 8, 6:55 PM
This is an important one, I think. AI progress and recommendations: https://t.co/Zy6J6bxYBI
Sam Altman
Nov 7, 10:05 PM
The government has played a role in critical infrastructure builds. Our public submission (posted on our blog) shares our thinking and suggests ideas for how the US government can support domestic supply chain/manufacturing. This is very much in line with everything we have heard from the government about their priorities. We think US reindustrialization across the entire stack--fabs, turbines, transformers, steel, and much more--will help everyone in our industry, and other industries (including us). To the degree the government wants to do something to help ensure a domestic supply chain, great. This is part of a national policy that makes sense to me, and it would be good for the whole country, many industries, and all players in those industries. But that's super different from loan guarantees to OpenAI, and we hope that's clear.
Sam Altman
Nov 6, 7:21 PM
I would like to clarify a few things. First, the obvious one: we do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market. If one company fails, other companies will do good work. What we do think might make sense is governments building (and owning) their own AI infrastructure, but then the upside of that should flow to the government as well. We can imagine a world where governments decide to offtake a lot of computing power and get to decide how to use it, and it may make sense to provide lower cost of capital to do so. Building a strategic national reserve of computing power makes a lot of sense. But this should be for the government’s benefit, not the benefit of private companies.

The one area where we have discussed loan guarantees is as part of supporting the buildout of semiconductor fabs in the US, where we and other companies have responded to the government’s call and where we would be happy to help (though we did not formally apply). The basic idea there has been ensuring that the sourcing of the chip supply chain is as American as possible in order to bring jobs and industrialization back to the US, and to enhance the strategic position of the US with an independent supply chain, for the benefit of all American companies. This is of course different from governments guaranteeing private-benefit datacenter buildouts.

There are at least 3 “questions behind the question” here that are understandably causing concern.

First, “How is OpenAI going to pay for all this infrastructure it is signing up for?” We expect to end this year above $20 billion in annualized revenue run rate and grow to hundreds of billions by 2030. We are looking at commitments of about $1.4 trillion over the next 8 years.
Obviously this requires continued revenue growth, and each doubling is a lot of work! But we are feeling good about our prospects there; we are quite excited about our upcoming enterprise offering, for example, and there are categories like new consumer devices and robotics that we also expect to be very significant. But there are also new categories we have a hard time putting specifics on, like AI that can do scientific discovery, which we will touch on later. We are also looking at ways to more directly sell compute capacity to other companies (and people); we are pretty sure the world is going to need a lot of “AI cloud”, and we are excited to offer this. We may also raise more equity or debt capital in the future. But everything we currently see suggests that the world is going to need a great deal more computing power than what we are already planning for.

Second, “Is OpenAI trying to become too big to fail, and should the government pick winners and losers?” Our answer on this is an unequivocal no. If we screw up and can’t fix it, we should fail, and other companies will continue on doing good work and serving customers. That’s how capitalism works, and the ecosystem and economy would be fine. We plan to be a wildly successful company, but if we get it wrong, that’s on us. Our CFO talked about government financing yesterday, and then later clarified her point, underscoring that she could have phrased things more clearly. As mentioned above, we think that the US government should have a national strategy for its own AI infrastructure. Tyler Cowen asked me a few weeks ago about the federal government becoming the insurer of last resort for AI, in the sense of risks (like nuclear power), not about overbuild. I said “I do think the government ends up as the insurer of last resort, but I think I mean that in a different way than you mean that, and I don’t expect them to actually be writing the policies in the way that maybe they do for nuclear”.
Again, this was in a totally different context than datacenter buildout, and not about bailing out a company. What we were talking about is something going catastrophically wrong—say, a rogue actor using an AI to coordinate a large-scale cyberattack that disrupts critical infrastructure—and how intentional misuse of AI could cause harm at a scale that only the government could deal with. I do not think the government should be writing insurance policies for AI companies.

Third, “Why do you need to spend so much now, instead of growing more slowly?” We are trying to build the infrastructure for a future economy powered by AI, and given everything we see on the horizon in our research program, this is the time to invest and really scale up our technology. Massive infrastructure projects take quite a while to build, so we have to start now. Based on the trends we are seeing of how people are using AI and how much of it they would like to use, we believe the risk to OpenAI of not having enough computing power is more significant and more likely than the risk of having too much. Even today, we and others have to rate limit our products and not offer new features and models because we face such a severe compute constraint. In a world where AI can make important scientific breakthroughs but at the cost of tremendous amounts of computing power, we want to be ready to meet that moment. And we no longer think it’s in the distant future. Our mission requires us to do what we can to not wait many more years to apply AI to hard problems, like contributing to curing deadly diseases, and to bring the benefits of AGI to people as soon as possible. Also, we want a world of abundant and cheap AI. We expect massive demand for this technology, and for it to improve people’s lives in many ways. It is a great privilege to get to be in the arena, and to have the conviction to take a run at building infrastructure at such scale for something so important.
This is the bet we are making, and given our vantage point, we feel good about it. But we of course could be wrong, and the market—not the government—will deal with it if we are.
Sam Altman
Nov 5, 10:49 PM
A thing often in common among great startup investors, founders, and researchers: trading a lot of small mistakes for a few giant wins. (Surprisingly, many people seem to prefer the opposite trade: a few big mistakes in exchange for a lot of small wins.)
Sam Altman
Nov 4, 10:48 PM
interesting post from @boazbaraktcs: https://t.co/3SXF6mvV8h
Sam Altman
Nov 4, 8:59 PM
Codex has transformed how OpenAI builds over the last few months. Have some great upcoming models too. Amazing work by the team!
Sam Altman
Nov 3, 7:36 PM
Very pleased to be working with Amazon to bring a lot more NVIDIA chips online for OpenAI to keep scaling!
Sam Altman
Nov 2, 7:28 PM
i helped turn the thing you left for dead into what should be the largest non-profit ever. you know as well as anyone that a structure like what openai has now is required to make that happen.
Sam Altman
Oct 30, 10:32 PM
GPT-6 will be renamed GPT-6-7, you're welcome
Sam Altman
Oct 30, 10:24 PM
A tale in three acts: https://t.co/ClRZBgT24g
Sam Altman
Oct 30, 9:47 PM
RT @billpeeb: we are launching the ability to buy extra gens in sora today. we are doing this for two main reasons: first, we have been qu…
Sam Altman
Oct 30, 9:00 PM
A new security agent called Aardvark:
Sam Altman
Oct 30, 8:56 PM
All palaces are temporary palaces. All theories are provisional theories.
Sam Altman
Oct 30, 8:43 PM
If you want to use more Codex after you hit your subscription limits, you can now buy credits as needed. This is something we expect to do for compute-intensive features; it will let us keep subscription prices low for most users and let the rest of you go wild.
Sam Altman
Oct 30, 2:54 PM
I love the way Tibo and team are so methodically going through every inch of the system to try to track down user feedback.
Sam Altman
Oct 29, 5:19 PM
Yesterday we did a livestream. TL;DR:

We have set internal goals of having an automated AI research intern by September of 2026 running on hundreds of thousands of GPUs, and a true automated AI researcher by March of 2028. We may totally fail at this goal, but given the extraordinary potential impacts we think it is in the public interest to be transparent about this.

We have a safety strategy that relies on 5 layers: Value alignment, Goal alignment, Reliability, Adversarial robustness, and System safety. Chain-of-thought faithfulness is a tool we are particularly excited about, but it is somewhat fragile and requires drawing a boundary and a clear abstraction.

On the product side, we are trying to move towards a true platform, where people and companies building on top of our offerings will capture most of the value. Today people can build on our API and apps in ChatGPT; eventually, we want to offer an AI cloud that enables huge businesses.

We have currently committed to about 30 gigawatts of compute, with a total cost of ownership over the years of about $1.4 trillion. We are comfortable with this given what we see on the horizon for model capability growth and revenue growth. We would like to do more—we would like to build an AI factory that can make 1 gigawatt per week of new capacity, at a greatly reduced cost relative to today—but that will require more confidence in future models, revenue, and technological/financial innovation.

Our new structure is much simpler than our old one. We have a non-profit called OpenAI Foundation that governs a Public Benefit Corporation called OpenAI Group. The foundation initially owns 26% of the PBC, but its stake can increase with warrants over time if the PBC does super well. The PBC can attract the resources needed to achieve the mission. Our mission, for both our non-profit and PBC, remains the same: to ensure that artificial general intelligence benefits all of humanity.
The nonprofit is initially committing $25 billion to health and curing disease, and AI resilience (all of the things that could help society have a successful transition to a post-AGI world, including technical safety but also things like economic impact, cyber security, and much more). The nonprofit now has the ability to actually deploy capital relatively quickly, unlike before. In 2026 we expect that our AI systems may be able to make small new discoveries; in 2028 we could be looking at big ones. This is a really big deal; we think that science, and the institutions that let us widely distribute the fruits of science, are the most important ways that quality of life improves over time.
Sam Altman
Oct 28, 5:23 PM
California is my home, and I love it here, and when I talked to Attorney General Bonta two weeks ago I made clear that we were not going to do what those other companies do and threaten to leave if sued. We really wanted to figure this out and are really happy about where it all landed — and very much appreciate the work of the Attorney General!
Sam Altman
Oct 28, 2:43 PM
Jakub and I are going to do a livestream and answer questions today at 10:30 am pacific. We have a lot of things to talk about--of course we will cover our new corporate structure, but we will also discuss our new goals for research, the evolution of our product offerings, an update on our infrastructure buildout, the initial funding areas for the nonprofit, and more. It is probably the most important stuff we have to say this year. TL;DR on the structure--the non-profit remains in control and, if we do our jobs well, will be the best-resourced non-profit ever. We are excited to get to work immediately deploying the capital. Our LLC becomes a PBC. I am grateful to the Delaware and California AGs, our partners at Microsoft, all our investors, and especially to our tireless team for their work in getting to a good place here.
Sam Altman
Oct 27, 6:22 PM
RT @JoHeidecke: 🧵Today we’re sharing more details about improvements of the default GPT-5 model in responding to sensitive conversations ar…
Sam Altman
Oct 27, 5:13 PM
RT @lulumeservey: It is Fidji Simo.
Sam Altman
Oct 24, 4:55 PM
RT @jasonkwon: One of the greatest opportunities in AI safety and security is the chance to help support the creation of new industry verti…
Sam Altman
Oct 24, 4:22 PM
ok gl
Sam Altman
Oct 23, 10:55 PM
Ari and team are super talented and we can't wait for Sky x ChatGPT!
Sam Altman
Oct 22, 10:52 PM
RT @billpeeb: sora roadmap update: in the spirit of building this app openly, here's what we're landing soon. first, more creation tools.…
Sam Altman
Oct 21, 11:21 PM
RT @bengoodger: When I joined @OpenAI last year all I had was an idea: that putting the world’s best AI assistant at the heart of your bro…
Sam Altman
Oct 21, 5:40 PM
Our new AI-first web browser, ChatGPT Atlas, is here for macOS. Please send feedback! Availability on other platforms to follow.
Sam Altman
Oct 21, 3:13 PM
10 am livestream today to launch a new product I'm quite excited about!
Sam Altman
Oct 21, 3:58 AM
RT @SebastienBubeck: My posts last week created a lot of unnecessary confusion*, so today I would like to do a deep dive on one example to…
Sam Altman
Oct 15, 7:11 PM
Ok, this tweet about upcoming changes to ChatGPT blew up on the erotica point much more than I thought it was going to! It was meant to be just one example of us allowing more user freedom for adults. Here is an effort to better communicate it:

As we have said earlier, we are making a decision to prioritize safety over privacy and freedom for teenagers. And we are not loosening any policies related to mental health. This is a new and powerful technology, and we believe minors need significant protection.

We also care very much about the principle of treating adult users like adults. As AI becomes more important in people's lives, allowing a lot of freedom for people to use AI in the ways that they want is an important part of our mission. It doesn't apply across the board, of course: for example, we will still not allow things that cause harm to others, and we will treat users who are having mental health crises very differently from users who are not. Without being paternalistic, we will attempt to help users achieve their long-term goals. But we are not the elected moral police of the world. In the same way that society differentiates other appropriate boundaries (R-rated movies, for example), we want to do a similar thing here.
Sam Altman
Oct 15, 3:23 AM
Things have come a long way since the delivery of the DGX-1 9 years ago; amazing to see...
Sam Altman
Oct 14, 4:02 PM
We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.

In a few weeks, we plan to put out a new version of ChatGPT that allows people to give it a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).

In December, as we roll out age-gating more fully and as part of our “treat adult users like adults” principle, we will allow even more, like erotica for verified adults.
Sam Altman
Oct 14, 3:51 PM
RT @bradlightcap: welcome @Walmart to instant checkout 🤝
Sam Altman
Oct 12, 1:00 AM
Codex is so good, and is going to get so amazing. I am having a hard time imagining what creating software at the end of 2026 is going to look like.
Sam Altman
Oct 10, 11:58 PM
One of the most fun parts of OpenAI is watching people here level up so fast and do such excellent work. We are operating at a high level across many different disciplines and many of the people doing it have never done it before, and joined us at the beginning of their career. If you believe in people and give them a lot of responsibility and support (and pick the right people to bet on) you will be surprised on the upside more often than you think. I would love to see more companies operate this way and think we would all benefit. (This was also one of the most fun parts of startup investing.)
Sam Altman
Oct 6, 1:04 PM
Excited to partner with AMD to use their chips to serve our users! This is all incremental to our work with NVIDIA (and we plan to increase our NVIDIA purchasing over time). The world needs much more compute...
Sam Altman
Oct 5, 11:28 PM
excited for dev day tomorrow! got some new stuff to help you build with AI.
Sam Altman
Oct 5, 11:27 PM
RT @billpeeb: sora update: cameo and safety improvements inbound! 1. cameo restrictions: we've heard from lots of folks who want to make t…
Sam Altman
Oct 4, 12:38 AM
Sora update #1: https://t.co/DC9ZpR7cSC
Sam Altman
Oct 3, 6:35 AM
RT @SebastienBubeck: Well, this time it's by Terence Tao himself: https://t.co/hFuWFLvoTC https://t.co/F3zRYnYJVE
Sam Altman
Oct 1, 1:36 PM
i get the vibe here, but... we do mostly need the capital to build AI that can do science, and for sure we are focused on AGI with almost all of our research effort. it is also nice to show people cool new tech/products along the way, make them smile, and hopefully make some money given all that compute need. when we launched chatgpt there was a lot of "who needs this and where is AGI?". reality is nuanced when it comes to optimal trajectories for a company.
Sam Altman
Oct 1, 1:21 PM
it is way less strange to watch a feed full of memes of yourself than i thought it would be. not sure what to make of this.
Sam Altman
Oct 1, 2:41 AM
does feel like this is really starting to happen (in tiny ways)
Sam Altman
Oct 1, 2:38 AM
amazing breakthroughs from @model_mechanic again and again and i have no doubt the best ones are coming soon :)
Sam Altman
Sep 30, 5:40 PM
lol gj gabriel
Sam Altman
Sep 30, 5:32 PM
thanks bill for your leadership and vision on this project; it has been awesome to watch.
Sam Altman
Sep 30, 5:15 PM
congrats, liam!
Sam Altman
Sep 30, 5:14 PM
We are launching a new app called Sora. This is a combination of a new model called Sora 2, and a new product that makes it easy to create, share, and view videos. This feels to many of us like the “ChatGPT for creativity” moment, and it feels fun and new. There is something great about making it really easy and fast to go from idea to result, and the new social dynamics that emerge. Creativity could be about to go through a Cambrian explosion, and along with it, the quality of art and entertainment can drastically increase. Even in the very early days of playing with Sora, it’s been striking to many of us how open the playing field suddenly feels. In particular, the ability to put yourself and your friends into a video with the cameo feature—the team worked very hard on character consistency—is something we have really enjoyed during testing, and is to many of us a surprisingly compelling new way to connect.

We also feel some trepidation. Social media has had some good effects on the world, but it’s also had some bad ones. We are aware of how addictive a service like this could become, and we can imagine many ways it could be used for bullying. It is easy to imagine the degenerate case of AI video generation that ends up with us all being sucked into an RL-optimized slop feed. The team has put great care and thought into trying to figure out how to make a delightful product that doesn’t fall into that trap, and has come up with a number of promising ideas. We will experiment in the early days of the product with different approaches.

In addition to the mitigations we have already put in place (which include things like measures to prevent someone from misusing someone’s likeness in deepfakes, safeguards for disturbing or illegal content, periodic checks on how Sora is impacting users’ mood and wellbeing, and more), we are sure we will discover new things we need to do if Sora becomes very successful.
To help guide us towards more of the good and less of the bad, here are some principles we have for this product:

*Optimize for long-term user satisfaction. The majority of users, looking back on the past 6 months, should feel that their life is better for using Sora than it would have been if they hadn’t. If that’s not the case, we will make significant changes (and if we can’t fix it, we would discontinue offering the service).

*Encourage users to control their feed. You should be able to tell Sora what you want—do you want to see videos that will make you more relaxed, or more energized? Or only videos that fit a specific interest? Or only for a certain amount of time? Eventually, as our technology progresses, you should be able to tell Sora what you want in detail in natural language. (However, parental controls for teens include the ability to opt out of a personalized feed, and other things like turning off DMs.)

*Prioritize creation. We want to make it easy and rewarding for everyone to participate in the creation process; we believe people are natural-born creators, and creating is important to our satisfaction.

*Help users achieve their long-term goals. We want to understand a user’s true goals, and help them achieve them. If you want to be more connected to your friends, we will try to help you with that. If you want to get fit, we can show you fitness content that will motivate you. If you want to start a business, we want to help teach you the skills you need. And if you truly just want to doom scroll and be angry, then ok, we’ll help you with that (although we want users to spend time using the app only if they think it’s time well spent, we don’t want to be paternalistic about what that means to them).
Sam Altman
Sep 30, 5:09 PM
Excited to launch Sora 2! Video models have come a long way; this is a tremendous research achievement. Sora is also the most fun I've had with a new product in a long time. The iOS app is available in the App Store in the US and Canada; we will expand quickly.
Sam Altman
Sep 30, 5:08 PM
lol
Sam Altman
Sep 30, 11:55 AM
I mostly buy stuff from ChatGPT now, so I am excited for this new feature!
Sam Altman
Sep 30, 11:51 AM
RT @OpenAI: Introducing parental controls in ChatGPT. Now parents and teens can link accounts to automatically get stronger safeguards for…
Sam Altman
Sep 26, 4:35 AM
Had fun being in Germany to launch a sovereign cloud offering with SAP and Microsoft; important to us to help governments use our frontier models.
Sam Altman
Sep 25, 8:50 PM
very important work on a new eval
Sam Altman
Sep 25, 7:36 PM
Today we are launching my favorite feature of ChatGPT so far, called Pulse. It is initially available to Pro subscribers. Pulse works for you overnight, and keeps thinking about your interests, your connected data, your recent chats, and more. Every morning, you get a custom-generated set of stuff you might be interested in. It performs super well if you tell ChatGPT more about what's important to you. In regular chat, you could mention “I’d like to go visit Bora Bora someday” or “My kid is 6 months old and I’m interested in developmental milestones” and in the future you might get useful updates. Think of treating ChatGPT like a super-competent personal assistant: sometimes you ask for things you need in the moment, but if you share general preferences, it will do a good job for you proactively. This also points to what I believe is the future of ChatGPT: a shift from being all reactive to being significantly proactive, and extremely personalized. This is an early look, and right now only available to Pro subscribers. We will work hard to improve the quality over time and to find a way to bring it to Plus subscribers too. Huge congrats to @ChristinaHartW, @_samirism, and the team for building this.
Sam Altman
Sep 25, 7:32 PM
RT @fidjissimo: AI should do more than just answer questions; it should anticipate your needs and help you reach your goals. That’s what we…
Sam Altman
Sep 25, 9:04 AM
RT @SebastienBubeck: It's becoming increasingly clear that gpt5 can solve MINOR open math problems, those that would require a day/few days…
Sam Altman
Sep 24, 11:29 AM
Progress at our datacenter in Abilene. Fun to visit yesterday! https://t.co/W22ssjWstW