Hacker News

I'd love to see the statistics on the daily number of Google searches since OpenAI (ChatGPT in particular) came to the fore this year.

Anecdotally, I now use ChatGPT for at least 25-50% of the queries for which a search engine was previously my only channel.

If I were in charge of Alphabet, I'd be starting to worry. This move makes them look a bit desperate.



I don't see how that could even register. We use OpenAI at home (both my wife and I), but everything ChatGPT spouts must, invariably, be verified.

It's not even "trust but verify" but "Oh yep, I hadn't thought of that. However, it may be a lie, because ChatGPT is a pathological liar, and as I cannot possibly trust a lying tool, I'll now verify." (I know, I know, there's a discussion regarding nomenclature: "lying" / "hallucinating" or whatever. But anyone who's actually using ChatGPT knows what I mean.)

Basically the output of ChatGPT, for me, goes directly into Google / Wikipedia / etc.

The one case where I can use the output of ChatGPT directly is translation from, say, English to French or vice versa, where I know both languages well enough to tell whether the translation is okay.

Those who believe they can use the output of ChatGPT without verifying it are basically like those lawyers who cited hallucinated cases to a judge.

As another commenter noted: it hasn't even made a dent in Google's search requests, and that is no surprise.


It starts with the power users. GPT-4 has certainly had a big impact on my Google searches.

For example, all my tech support searches now go to GPT-4. Those are painful on Google. There's no need to verify with a Google search, since you can just try out what GPT-4 says.

Concrete example: I use it all the time to help me with Excel. What it suggests is nearly always correct. It has turned me into an Excel power user within a few weeks.

You need to develop a sense for what it's likely to be correct on, but once you do, it's insanely useful. Simple rule of thumb: if you think you'd find a direct response to your question by wading through pages of ad-infested Google results, it'll definitely work great on GPT-4.

The way I recall it, it took multiple years for Google to go from a secret power-user thing to displacing Yahoo and AltaVista for the broad user base. And that was at a time when being online at all was itself sort of an early-adopter thing.

Anyway, I guess my point is, I would be worried if I was Google, and ignore this tech at your own risk...


ChatGPT remixes info from beyond the first page of Google search. That's the value. If you ask it for a list of nice nature spaces in Tokyo, as I just did, it returns 12 spots that all seem appealing. That's already more information density than a Google search. But now I have to go look up whether these places actually exist (though that isn't the sort of mistake it usually makes), where they are, their hours, admission prices, etc. So that question turns into a few Google searches. Of course, I'll then have to actually visit the sites for these gardens, if they exist, because you can't quite trust Google Maps's accuracy either: hours and opening days can be off, especially around the holidays, which is when I'd like to go to Tokyo. One ChatGPT query -> several Google searches.

If, for this text output, ChatGPT also linked me directly to the gardens' sites, scraped the info from the live pages, and summarized it, that would actually save a ton of time. Google could have a leg up because it has a knowledge graph, but so does Microsoft. This requires a lot more than training an LLM: it requires an actual product, not the tech demo it is now. A chat with an agent that occasionally lies is a terrible UI.

I think there’s great scope for UI innovation here. But such an experience might be pretty expensive in terms of compute - lots of LLM queries and extra lookup systems. Someone who does this hard integration work and is willing to spend a lot of resources per query will deliver a delightful, time-saving user experience, and can probably charge for it. And that may be a great value prop for local AI - you can give it tons of resources to solve your particular problem. As I see it, mass-market LLMs that are provided for free will never do this extra work for you. ChatGPT might be in a good position because it already has a ton of paying customers that it can continue to draw a wall around. Their early-November announcement might be something along these lines.


Doesn’t ChatGPT’s Browse with Bing option do that? It’s definitely provided me with inline links to its browsed sources.


I don't know, I make plenty of queries that don't need verification. I would say the majority.

Write a polite email to X saying Y -> doesn't need verification.

Rewrite this in formal English for a grant application -> doesn't need verification.

How to tar gz a folder in Linux? -> doesn't need verification. When I get the answer, I will probably think "oh, sure, it was czvf". And even if I didn't know what the arguments were, I would know enough to know that a tar command isn't going to delete my files or anything like that, so I would just try it and see if it worked.
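For reference, the czvf invocation being recalled can be spelled out; a minimal sketch assuming GNU tar, with made-up folder and file names:

```shell
# Create a sample folder to archive (names invented for illustration)
mkdir -p demo_folder
echo "hello" > demo_folder/file.txt

# c = create, z = gzip-compress, v = verbose, f = archive file name
tar czvf demo_folder.tar.gz demo_folder

# t = list contents, confirming the archive without extracting anything
tar tzf demo_folder.tar.gz
```

As the comment says, the failure mode of a wrong flag here is typically an error message, so "just try it" is cheap.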

Write Python code to show a box plot with such-and-such data -> doesn't need verification. I'm too lazy to write the code myself (or look into how seaborn works), but once I see the code, I can quickly understand it and check that it does what it should. Or actually run it and see if it worked.
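A minimal sketch of the kind of answer that query produces (the column names and data here are invented for illustration; assumes seaborn, pandas, and matplotlib are installed):

```python
# Render off-screen so the script also runs without a display
import matplotlib
matplotlib.use("Agg")

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Hypothetical "such-and-such data": two groups of measurements
data = pd.DataFrame({
    "group": ["A"] * 5 + ["B"] * 5,
    "value": [1, 2, 2, 3, 4, 2, 4, 4, 5, 6],
})

# One box per group, drawn by seaborn on a matplotlib axis
ax = sns.boxplot(data=data, x="group", y="value")
ax.set_title("Example box plot")
plt.savefig("boxplot.png")
```

This is exactly the "glance at it, then run it" verification style the comment describes: the code is short enough to eyeball, and the plot either looks right or it doesn't.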

Brainstorming (give me a bunch of titles for a paper about X/some ideas about how I could do Y) -> doesn't need verification, I can see the ideas and decide which I like and which I don't.

I get that if you're a journalist, a lawyer, or something like that, the majority of your ChatGPT queries will probably need verification, but that's definitely not my experience... probably because I don't often ask ChatGPT for things I don't know. Most of my use is either for things I could do myself but that require a time investment (where I'd rather ChatGPT do them in a few seconds), or for brainstorming. Neither requires a Google search at all.


I still have no clue how so many people in this thread are relying on it for code just based on the code output I get from it.


I find GPT-4 pretty amazing at coding myself. I learned basic R (plus a particular package within it) well enough to solve an emergency issue that was blocking our product launch for a coding competition, within 24 hours of first hearing of the language, just barely meeting our deadline. The team was amazed, and I could never have done it without GPT-4.

Otherwise, I'm regularly coding with it as an assistant. While it's not always 100% correct, it often slashes the time required, especially when I can request things like a website skeleton layout and it just seamlessly does it all, using CSS flow layouts and whatnot. I ask it to implement a JavaScript library for a chat-style interface with bubbles and bam, it's there, even adapted to my existing code.

I'll still have to debug, and I _will_ still find issues occasionally, but the win is still often enormous for me.

ChatGPT 3.5 though. That's mostly just a toy, yes.


I love it (gpt4) for code. At least for Python and Typescript.

It doesn’t always get it perfect, but it gets a lot right. Having enough experience in the field makes it easy to know when it’s right or not. Then you either ask again with more context and information, or do it yourself.

Overall, it’s definitely increased my productivity.


They're using 4. The quality difference is huge for coding. Nearly a different product.


ChatGPT is like Google in the early days. Some people just never learned how to use a search engine. Other people very quickly did.


I don’t use chatgpt or LLMs for anything factual at all given the workflow you’ve described of asking a question and then needing to separately search all results. But they are very useful for framing documents, summarizing things, etc.


I use it with Browse with Bing and it works quite well for keeping up with research on a diverse set of topics - CS, AI, economics, neuroscience etc. The citations help you verify quickly.

And this is besides the other stuff that search engines can’t do directly but it does quite well. It has definitely cut web searches for me.


If you want more factual answers why not use Bing with ChatGPT?


The types of queries where GPT is useful (or will soon be useful, I hope) to me are those where Google themselves have "destroyed the internet".^

If you want basic information about a physical exercise, a recipe, a travel destination, or such... all the content in Google seems to be content-farmed: written quickly, by people who don't know the subject, copying similar articles. It's just a buggy, manual version of the proverbial GPT anyway. At this point, I may as well go direct.

If I want to know about doing weighted crunches instead of sit-ups, Google results already come with all the downsides of using GPT. The GPT UI is nicer, though. You can also narrow in on your questions, push back, and generally get the same level of content in a better package.

Maybe the old chestnut of "computerized recipe book" will finally be solved.

^Ra Ra!


> All the content in Google seems to be content farmed

you are just forgetting to add the word "reddit" at the end of your query..


Idk... Maybe... But Reddit seems to hold info pretty poorly. A lot of the subs/posts are incredibly dumb.

The good information is often there, but finding it is hit and miss. Until relatively recently, unofficial archives of old web forums were still available. Reddit search is a downgrade, imo.

In any case, GPT can do Reddit pretty well.


> In any case gpt can do reddit pretty well.

I am not sure I trust gpt, hallucinations are still very common.


That doesn’t help and is just as astroturfed at this point. Companies caught on to this years ago.


that's very far from my experience; the mods and voting community do a great job of ranking good posts and comments. Maybe you can give an example of a query whose results you're not satisfied with?



obviously many try to manipulate; the question is the final result, and whether they can really change the picture dramatically.


Casual users don't care about ChatGPT or any other smart chat app, because they've been used to Google Search, or classic search-bar-style web search, for decades now.

If chat apps take over, I think they will first take over desktop computers, because on mobile it is easier to type a few keywords into the web search bar and get 5 or 6 relevant web results than it is to chat with a chatbot for five minutes.

Informational queries are likely to be more useful in chat apps than in classic web search, but navigational and transactional queries are here to stay on Google or whatever search engine comes about.


I was going to say that on mobile you can just talk to GPT over the mic, but then I remembered that speech-to-text is apparently a hard problem that no one has made progress on in 15 years.


Actually, OpenAI just made the progress you're asking for. Try the voice conversation mode in the ChatGPT app (if you can; it's still in beta testing, IIRC). It is to the likes of Google Assistant as GPT-4 is to GPT-2. Huge quality jump: not only does it not suck (like, e.g., Google Assistant does), it works, and absurdly well at that. ~1% error rate for me in English, which isn't even my first language, on the go, in the rain, next to a noisy street...


Google reportedly has $118 billion cash on hand. I think they would have happily invested more than $2 billion in an OpenAI competitor if they thought more money than that would matter.


I'm struggling to find the source, but I read some in-depth analysis reporting that Bing only gained 0.53% of market share since the launch of the ChatGPT integration. Apparently they are cannibalizing small players, but not Google.


There is no difference whatsoever, you can look up some Similarweb stats on this.


the supermajority of humanity doesn't know what ChatGPT is useful for, so Google's total number of searches is almost certainly the same. You could only detect the difference for, say, searches that would return Stack Overflow results.


...which I suspect has a large overlap with ad-blocker users, and even those that don't block ads probably aren't as sensitive to online ads as the average Joe.


Also growth in Bing searches with Bing Chat (backed by ChatGPT). I even switched my mobile browser from Brave to Edge to use the integration.



