The latest viral ChatGPT trend is doing “reverse location search” from photos
There’s a somewhat concerning new trend going viral: people are using ChatGPT to figure out the location shown in images.

This week, OpenAI released its newest AI models, o3 and o4-mini, both of which can “reason” through uploaded images. In practice, the models can crop, rotate, and zoom in on photos, even blurry and distorted ones, to analyze them thoroughly.

These image-analysis capabilities, combined with the models’ ability to search the web, make for a powerful location-finding tool. Users on X quickly discovered that o3, in particular, is quite good at deducing cities, landmarks, and even restaurants and bars from subtle visual clues.

In many cases, the models don’t appear to be drawing on “memories” of past ChatGPT conversations, or on EXIF data, the metadata attached to photos that reveals details such as where a photo was taken.
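For context on what that EXIF metadata actually contains, here is a minimal sketch of pulling GPS coordinates out of a photo using the Pillow library; the file path and helper names are illustrative, not from any particular tool mentioned in this story:

```python
# Sketch: extract embedded GPS coordinates from a photo's EXIF data.
# Requires the Pillow library (pip install Pillow).
from PIL import Image
from PIL.ExifTags import GPSTAGS


def dms_to_decimal(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) to decimal degrees."""
    degrees, minutes, seconds = (float(v) for v in dms)
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern latitudes and western longitudes are negative.
    return -decimal if ref in ("S", "W") else decimal


def extract_gps(path):
    """Return (lat, lon) from a photo's EXIF data, or None if absent."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo IFD tag
    if not gps_ifd:
        return None
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}
    lat = dms_to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = dms_to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon
```

The point of the trend is that the models are reportedly inferring location from the pixels themselves, not from metadata like this, which most social platforms strip on upload anyway.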

X is full of examples of users feeding ChatGPT restaurant menus, neighborhood snapshots, storefronts, and selfies, and instructing o3 to imagine it’s playing “GeoGuessr,” an online game that challenges players to guess locations from Google Street View images.

It’s an obvious potential privacy issue. There’s nothing preventing a bad actor from screenshotting, say, a person’s Instagram Story and using ChatGPT to try to doxx them.

Of course, this could be done even before the launch of o3 and o4-mini. TechCrunch ran a number of photos through o3 and an older model without image-reasoning capabilities, GPT-4o, to compare the models’ location-guessing skills. Surprisingly, GPT-4o arrived at the same correct answer as o3 more often than not, and took less time.

There was at least one instance during our brief testing when o3 found a place GPT-4o couldn’t. Given a photo of a purple, mounted rhino head in a dimly lit bar, o3 correctly answered that it came from a Williamsburg speakeasy, not, as GPT-4o guessed, a U.K. pub.

That’s not to suggest o3 is flawless in this regard. Several of our tests failed: o3 got stuck in a loop, unable to arrive at an answer it was reasonably confident in, or volunteered a wrong location. X users have noted, too, that o3 can be pretty far off in its location deductions.

But the trend illustrates some of the emerging risks posed by more capable, so-called reasoning AI models. There appear to be few safeguards in place to prevent this sort of “reverse location search” in ChatGPT, and OpenAI, the company behind ChatGPT, doesn’t address the issue in its safety report for o3 and o4-mini.

We’ve reached out to OpenAI for comment. We’ll update this piece if they respond.
