Friday, May 24, 2024

Google does not want to be left behind ChatGPT and reinforces its Artificial Intelligence

This is the most important week of the year for Artificial Intelligence (AI). On Tuesday, in Mountain View (California), the heart of Silicon Valley, Google gave a boost to its AI software and services.

In its imposing Shoreline Amphitheatre, it showed its new features to hundreds of developers from around the world in a new edition of its annual Google I/O conference. And it did so just 24 hours after OpenAI presented the latest version of ChatGPT, in which it showed how its powerful AI can chat naturally with a human being.

On a cool and somewhat sunny morning, at 10 o’clock, Google CEO Sundar Pichai presented the news. The Internet giant does not want to miss the forest for the trees. That is why it is integrating Gemini, its rival to ChatGPT, into its Web search engine, which handles more than 90 percent of searches. In this way, it hopes users will ultimately end up relying on Gemini more than on OpenAI’s software.

Pichai opened with the benefits of the Gemini–Google search engine duo. “Last year we answered billions of queries as part of our generative search experience. People are using it to search in completely new ways and to ask new types of questions, longer and more complex queries, even perform photo searches, and get the best the Web has to offer.”


The company added new features to Gemini at Google I/O 2024.

At the conference, Pichai said that 2 billion people are already using Google’s AI across its various software and services. “An example is Google Photos, which we launched almost nine years ago. Since then, people have used it to organize their most important memories. Today that equates to more than 6 billion photos and videos uploaded every day,” he explained.

And he gave an example of how users can interact with Google Photos plus Gemini: “Show me how Lucía’s swimming has progressed,” one might ask.

The Google Photos demo, on stage.

“Here, Gemini goes beyond a simple search and recognizes different contexts, from swimming in the pool to snorkeling in the ocean, to the text and dates of your swimming certificates. And Photos brings it all together in one summary, so you can take it all in and relive amazing memories once again. We’ll be launching Ask Photos this summer, with more features to come.” In this way, the user can see when the swimming lessons began and track the girl’s progress, simply from the photo collection.

Google is in the AI race with its Gemini 1.5 Pro. But some developers want something faster and more cost-effective. So Google is releasing Gemini 1.5 Flash, a lighter model built for scale, optimized for tasks where low latency and cost matter most. 1.5 Flash will be available in AI Studio and Vertex AI.

Google CEO Sundar Pichai presented the news.

The surprising Project Astra

For its part, Project Astra shows multimodal understanding and real-time conversational capabilities. Google demonstrated how, using the phone’s camera, you can walk around a house and ask Gemini various questions, and how it can distinguish a problem written on paper from living beings or personal items.

That is, you can ask questions and the software recognizes through the camera what you are talking about. The surprising part is that, to go from reading text in a notebook to something animate in the environment, you do not need to restart or change sessions. It does so completely naturally, the way a person reads written words from a piece of paper and then looks up and talks to someone. For Gemini it works the same way.

Less than a month ago, on April 30, Google announced the availability in Argentina and other countries of Gemini, the app that allows direct access to its artificial intelligence from your cell phone.

Strictly speaking, Gemini already had its Spanish version on the Web, but for the past two weeks it has been possible to carry this artificial intelligence on a phone as an app.

Hundreds of developers from around the world gathered this Tuesday in the heart of technological change, Silicon Valley.

As with ChatGPT or Copilot, it is possible to pose complex queries to Gemini, ask for context, and follow a conversation across multiple topics, receiving the answers as elaborated text, unlike Google Assistant, which usually responds with a link to where there is more data.

The news did not make much noise, but it is very relevant. Google has been seeking to regain ground in the AI race against ChatGPT, which for various reasons took the lead. And it is doing so by incorporating Gemini first into Android, the operating system used by the vast majority of the world’s mobile phones.

Google I/O, a classic

The first version of Android was announced at the first Google I/O, in 2008, “with the purpose of building a unique and open technological ecosystem for mobile devices, while democratizing Internet access for all people,” the company said. It was held at the Moscone Convention Center in San Francisco.

Other milestones of this event include 2015, when Google Photos was introduced so that people could organize and store their photos and videos in one place.

In 2017, Google Lens was launched, a tool that bridges the virtual world with the real environment. Its creation paved the way for new features such as multisearch and Circle to Search, opening the door to more intuitive, natural, and practical online searches.

In 2023, Search Labs arrived, a tool that lets users try experimental Search features that integrate generative AI capabilities and provide feedback to improve Google products. The Search Generative Experience (SGE) is now available in 7 languages and more than 120 countries.
