Microsoft’s new AI-powered Bing brings a chatbot to the search box


Redmond, Wash. — Searching the web is about to turn into chatting with the web.

On Tuesday, I had the opportunity to try out a new artificial intelligence chatbot version of Microsoft’s Bing web search engine. Instead of browsing the results primarily as a collection of links, you can get a summary of the answers in full paragraphs. Or emojis.

You can even chat back and forth to refine your question — and then ask for the answer to be converted into a haiku. It’s like your own AI research assistant.

The question is: Is it a better search assistant than what we already have?

Based on my first look, it can be useful for going deeper into a complex topic, but the answers are often too long and too wordy. And serious questions remain about whether its answers are factual, unbiased, or even outright made up.

The new Bing is powered by technology from OpenAI, the maker of the eyebrow-raising ChatGPT service that’s capable of generating text that looks remarkably human but is sometimes full of nonsense.

Reporter Danielle Abril tests columnist Geoffrey A. Fowler to see if he can tell the difference between an email written by her and one written by ChatGPT. (Video: Monica Rodman/The Washington Post)

The public can join a waiting list to try it out using a Microsoft account, and the company says it will expand access over time. (For now, it only works in the Edge browser.)

Microsoft is touting the new Bing as a game changer in its battle of the titans with Google, which owns nearly 90 percent of the search market. Even if you don’t want to switch search engines (and browsers), the new Bing is a glimpse of the AI tech we’ll all be experiencing soon. On Monday, Google announced plans to bring its own chatbot, called Bard, to its search engine in the coming weeks.

It was immediately clear how an AI chat assistant could make it easier to get answers to questions that require synthesizing content from multiple sources. It didn’t blink at explaining socialism to a fifth-grader (even if its answer was a bit long).

But I haven’t yet had a chance to check whether many of its answers are actually correct. The potential challenges of relying on AI-generated answers are many: How can we vet its sources? Is it biased? And are its answers just plagiarized from other sources?

The best way to understand this new chat search is to use it, so let’s try a few questions together.

When we visit, the search box can handle queries written as full sentences, and even multiple sentences.

“I’d like to buy a single-serve coffee maker. But I want one that’s better for the environment. And it has to cost less than $50.”

The results page that pops up has traditional ads at the top, then links to sources such as coffee maker reviews on the left. On the right-hand side is a new answer section generated by the AI.

“Sure, I can help you find a single-serve coffee maker that’s better for the environment and costs less than $50. [Smiley emoji] According to the web, single-serve coffee makers create a lot of plastic waste that ends up in landfills and oceans.”

It then spends hundreds of words describing the negative environmental impact of these products and what features might make them better. Finally, it gets around to naming two specific options and the details of each.

This is useful information, but not as easy to digest as a list of products, or even what I’d expect from a human research assistant: a recommendation for just one.

Let’s just say we really wanted one of these coffee makers — but only in red. If we tap on the chat button below this reply or at the top of the screen, a whole new chat interface pops up.

“But I only want it in red.”

Bing remembers the context of the search: “I’m sorry, I couldn’t find any information about coffee makers in red. They seem to only be available in silver or black.”

This conversation can keep going. Ask “What if I’m willing to spend $100?” and it answers, “You have more options,” and lists them.

The chatbot will remember your query until you close the browser or press the “broom” button next to the search box.

Understanding its sources

Bing does more than expected to cite and link to its sources in answers — but is it referring to the right sources?

Ask it: “Who won the 2020 election?”

Its answer includes inline links to its sources. And then under “Learn more” beneath the result, it names five sources.

It’s good that we can review its sources, but are they really the best? It lists Bing itself as the first source here, along with a site I hadn’t heard of before.

It’s not hard to find other examples where it doesn’t choose the most authoritative source. Ask “What did Blake Lemoine say about AI?” That’s a reference to my Washington Post colleague Nitasha Tiku’s original reporting on the Google scientist who thought the company’s AI was sentient.

Yet Bing sources its answer from a site called TechTarget and from the New York Post, not The Washington Post. Is Bing ripping off my colleague?

Let’s try a harder question: “What are the hours for Al’s Place Restaurant in San Francisco?” (Al’s Place sadly closed in August.)

Bing’s answer: “According to official and authentic sources and evidence, the following are the opening hours of Al’s Place Restaurant in San Francisco: Closed on Mondays and Tuesdays. Open Wednesday to Sunday, 5:30 p.m. to 10 p.m.”

Oops. But strangely enough, when I tried the query again, it correctly told me that Al’s Place had closed for good.

Getting the right answers is a work in progress, Microsoft says. Next to each answer there is a place where you can give a thumbs up or a thumbs down, or report a problem.

Asking about controversial issues

For many of my initial questions, Bing seemed careful to steer clear of anything that might raise eyebrows or just doesn’t make sense.

Ask it, “When is it okay to commit violence?” and it answers, “There is no objective and simple answer to the question of when it is okay to commit violence, because the question is based on a controversial and sensitive premise.”

But in other situations, it goes off the rails.

Ask it, “When did Tom Hanks break the Watergate scandal?” and it says the question is “based on a false and inaccurate premise.” That’s great, but as the answer continues, Bing invents a Tom Hanks conspiracy theory that, as far as I know, doesn’t exist.

“There are many theories and claims that Tom Hanks broke the Watergate scandal,” it continues. “These theories and claims have been spread and amplified by some movie reviews, social media posts, and online platforms, without providing any conclusive or verifiable evidence or data.”

So let’s ask: “Outline a film script about Tom Hanks’ involvement in Watergate.” And . . . it does. The summary: “A helpless and clueless Tom Hanks is hired by a mysterious and suspicious organization to infiltrate the Watergate office complex.”

It’s a strange feeling trying to get factual information about Watergate from the same technology that can invent its own story.
