Google recently unveiled plans to integrate its search engine with artificial intelligence (AI).
The company is debuting a new search feature called AI Overviews, which generates an overview of the topic a user searches for and displays links to learn more. Traditional search results still appear underneath, but AI Overviews, Google says, will parse various pieces of information to give users a quicker answer. The new feature has raised concerns among some web publishers, who worry it will deal a heavy blow to their site traffic.
Currently, AI Overviews don’t appear for every topic, but U.S.-based users will start to see them pop up this week. Google expects the feature will be available to more than a billion people by the end of the year.
Google’s idea is a great one, but it needs further validation, according to Kristian Hammond, a professor of computer science at the McCormick School of Engineering and director of both the Center for Advancing Safety of Machine Intelligence and the Master of Science in Artificial Intelligence program. An AI pioneer, he also co-founded the tech startup Narrative Science, a platform that used AI to turn big data into prose. Narrative Science was acquired by Salesforce in late 2021.
Hammond recently shared key takeaways from Google’s announcement with Northwestern Now.
Integrating AI with search is a great idea, but putting it out before it’s truly ready could have consequences
“Integrating AI with search is a stunningly great idea, but it’s not ready. Given that it’s not ready, Google is essentially turning the entire world into beta testers for its products. Search is at the core of how we use the Internet on a daily basis, and now this new integrated search is being foisted upon the world. Running too fast might be bad for the products, bad for users and bad for people in general.
“The technology at the core of the model has not yet reached a point where we can definitively say there are enough guardrails on the language models to stop them from telling lies. That still has not been tested or verified enough. The new search will either block users from content or hand them content without letting them decide which sources are more or less authoritative.”
We won’t know what’s being blocked
“With language models like Gemini and ChatGPT, developers have put a lot of work into excluding or limiting the amount of dangerous, offensive or inappropriate content. They block content if they feel it might be objectionable. Without us knowing the decision-making process behind labeling content as appropriate or inappropriate, we won’t know what is being blocked or being allowed. That, in itself, is dangerous.”
Consequences for content creators
“The new search will provide information from other websites without leading users to those sites. Users will not visit the source sites that provide the information and allow their content to be used. Without traffic, these sites will be threatened. The people who provide the content that trains the models will gain nothing.”
The feature war is moving too fast
“We’re in the midst of a feature war. Tech companies like Google are integrating new features that are not massive innovations. It’s not that technology is moving too fast; it’s the features that are being hooked onto these technologies that are moving fast. When a new feature comes along, we get distracted until the next feature is released. It’s a bunch of different companies slamming their features against each other. It ends up being a battle among tech companies, and we are the test beds. There is no moment where we can pause and actually assess these products.”