Updates In AI, Part 1
Lots has happened!

You Can Listen
You can LISTEN to my newsletter! Just go to https://go.ttot.link/AIUpdatesPart1. Let me know whether you like this feature and whether I should continue it. No, that’s not me reading it.
Let’s Get Into It!
There have been LOTS of updates to the various Artificial Intelligence engines and services since we last discussed them. So many, in fact, that I’m going to limit this discussion to just a few of the more popular artificial intelligence services: ChatGPT by OpenAI, Gemini by Google, Claude by Anthropic, and Perplexity by the company of the same name.
There’s so much to discuss that this is part 1 of 2 parts. We’ll discuss OpenAI and Google this time and Anthropic and Perplexity next time.
OpenAI
OpenAI has brought new, improved models to ChatGPT.com. The most recent, and the default, is GPT-5. It increased the context window (which is essentially how much ChatGPT can remember during a conversation) to 400,000 tokens, or roughly 350,000 words, the length of a very long novel (for comparison, “Gone with the Wind” by Margaret Mitchell is 418,053 words). OpenAI also concentrated on reducing hallucinations, where the AI basically “lies” to you. GPT-5 supports multi-modal input and output, which means you can give it pictures, audio, and video and it can respond in those formats. If you divide your conversations into “projects,” ChatGPT can remember and use all of the conversations you’ve had in that project. OpenAI also provided access to “deep research” for both free and paid users, though free users get a less capable version. Paid users get transcriptions for meetings, as well as connectors to services like Gmail, Google Calendar, and other external tools (see https://mashable.com/article/chatgpt-5-openai-gmail-calendar and https://chatgpt.com/features/connectors).
Sora, while not a part of ChatGPT, is still worth mentioning. It’s OpenAI’s video generator, and it was upgraded to Sora 2. It is now able to generate much more realistic 10- or 15-second videos from typed or spoken instructions, and I have to admit, I find it quite impressive. You can check out a few of the videos I had it generate for me at https://go.ttot.link/TonysSora. Sora is available at Sora.com.
Google
Gemini, Google’s AI, is at https://gemini.google.com. Its large language model was recently updated to 2.5 Pro, which brought a much larger context window of up to one million tokens. For comparison, Leo Tolstoy’s “War and Peace” is only about 750,000 tokens, and the entire Harry Potter book series is only about 700,000 tokens. Google also introduced 2.5 Flash, a smaller, faster version of the 2.5 Pro model, and it provides a “deep research” facility similar to the one offered by OpenAI.
Gemini Live, a feature on Apple and Android devices, can now, with your approval, access the camera and read the screen, so you can ask Gemini to answer questions about and analyze what’s on your screen or in your surroundings. It can also interact with Google Maps, Google Calendar, and other Google services. See https://blog.google/technology/ai/google-ai-updates-october-2025/ for more details about the updates and https://gemini.google/overview/gemini-live/ for more info on Gemini Live.
Nano Banana is Google’s image-editing model, announced and made a part of Gemini in August 2025 (see https://www.nano-banana.ai/ for more info and to try it out). Besides being part of Gemini, it is also part of Google Lens (see https://lens.google/), NotebookLM, and other Google products. NotebookLM has been discussed in previous newsletters and is also referenced below, in my wrap-up. Nano Banana 2 is expected to arrive this month (November 2025) and is hoped to bring 4K resolution as well as faster generation and edits.
Veo is Google’s AI-powered video generator. Version 3.1 was introduced in October 2025 and includes better audio, higher-resolution video optimized for platforms like YouTube, and the ability to stitch shorter generated clips together into videos of a minute or more. It uses 3D modeling to better render 2D movements and views, and users can “steer” the virtual camera as if on a real video shoot. Those who pay $20/month get only 3 videos per day; the more advanced features are available only to those who pay $200/month. More info about Veo is available at https://deepmind.google/models/veo/.
That’s all for this time
I hope you’ll take some time to investigate some of the updates I’ve discussed, and don’t miss my next update, where we’ll cover Claude by Anthropic and Perplexity by Perplexity. Don’t hesitate to write to me if you have questions!
As always, my intent is to help you understand the basics and equip you to search for more detailed information.
Please feel free to email me with questions, comments, suggestions, requests for future columns, to sign up for my newsletter, or whatever at [email protected] or just drop me a quick note and say HI!
And remember that I maintain a NotebookLM notebook with access to all of my previous newsletters at https://go.ttot.link/TonysNotebook. You can ask it questions like “what are passkeys” or “what can I do to help me remember things.” You’ll need a Google account to access it, and when you visit, you’ll be given your own copy of the notebook.
Newsletter
If you like, you can read my most recent newsletter in the Hillsboro Times Gazette at https://go.ttot.link/TG-Column - I should have that link updated shortly after this edition of the newsletter appears in the online version of the newspaper.