Hacker News

I did a little research and it seems like it's just pulling known answers from the web. This is more akin to a fancy search engine than AI actually generating code. I found this code block online verbatim:

  public class Textbook extends Book {
  
    private int edition;
  
    public Textbook(String bookTitle, double bookPrice, int ed) {
      super(bookTitle, bookPrice);
      edition = ed;
    }
  
    public String getBookInfo() {
      return super.getBookInfo() + "-" + edition;
    }
  
    public int getEdition() {
      return edition;
    }
  
    public boolean canSubstituteFor(Textbook other) {
      return getTitle().equals(other.getTitle())
          && getEdition() >= other.getEdition();
    }
  }

https://www.skylit.com/beprepared/2022-FR-Solutions.pdf
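For context, the Textbook class above only compiles against a Book superclass that isn't quoted in the thread. Here's a minimal sketch of what that superclass might look like, inferred from the calls Textbook makes — the field names and the exact getBookInfo() string format are assumptions, not the exam's actual definition:

```java
// Minimal sketch of the Book superclass implied by the Textbook code
// above. The exam's real definition isn't quoted in this thread, so
// the field names and the getBookInfo() format here are assumptions.
class Book {
  private String title;
  private double price;

  public Book(String bookTitle, double bookPrice) {
    title = bookTitle;
    price = bookPrice;
  }

  public String getTitle() {
    return title;
  }

  public double getPrice() {
    return price;
  }

  // Textbook.getBookInfo() appends "-" + edition to whatever this
  // returns, so a 2nd-edition textbook would print something like
  // "Frankenstein-10.0-2" under this assumed format.
  public String getBookInfo() {
    return title + "-" + price;
  }
}
```

With a superclass like this in place, canSubstituteFor(other) reads naturally: one textbook can substitute for another if the titles match and its edition is at least as new.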


Yes, this is how all ML works. It's just a search engine with some extra steps.

The weights of the GPT network are just a compressed representation of most of the searchable internet. All its responses are just synthesised from text you can find on google.

The amount of gushing over this, atm, is absurd. It reminds me of a person I knew who had a psychotic break and thought google was talking to them, because when they put a question in, even something personal, it gave them an answer they could interpret as meaningful.

That psychotic response has always colored my understanding of this hysteria, and many of the replies here in this thread have a schizotypal feel: GPT is just a mirror; it is only saying back to you what has already been said by others. It isn't a person in the world, has no notion of a world, has no intentions and says nothing. It isn't speaking to you; it isn't capable of having an intent to communicate.

It's google with a cartoon face.

If anyone wonders how we could be so foolish as to worship the sun, or believe god looks like an ape -- here it is. The default, mildly psychotic, disposition of animals: to regard everything they meet as one of their own.


Makes me think of the first longer session I had with GPT-3 (via AI Dungeon): I prompted it with a scenario about a holodeck technician, and, over the course of two hours, used it to create a story that could easily be turned into scripts for 3-5 separate Star Trek episodes - and that was with almost no backtracks.

Once the story finished (the AI got the character shot dead in a firefight)... I suddenly felt like I woke up from a really intense and very trippy dream. That strong feeling of unease stayed with me for the rest of the day, and only subsided once I slept it off. I later mentioned it on social media, and some responders mentioned that they too got this eerie, trance-like experience with it.


I've come to realise that schizophrenia and autism are just slightly more extreme modes of two aspects of animal consciousness.

In the former, your prior expectations about meaningfulness are dialled up; in the latter, they're dialled down.

In that sense, autism seems a kind of "literalism of the data" and schizophrenia a kind of "literalism of the interpretation".

Being somewhat autistic myself, I have always had a very negative reaction to "idea literalists" (religious, superstitious, transhumanist, crypto-blahblah... and AI-hype).


That's not quite identical to what ChatGPT output, but perhaps they are constrained to be so similar because these mechanical questions have basically canonical answers.

I would also imagine that the training data here, which is supposedly only up through 2021, does not include this specific AP Computer Science exam. That said, I do imagine something like the phenomenon you describe is happening, all the same.

I am actually rather confused as to how ChatGPT produced precisely this answer to the question about getBookInfo(), when understanding what getBookInfo() is meant to do depends on an example from a table which was supposedly excluded as input to ChatGPT.


The training data is almost the entire searchable internet; it certainly includes AP exam answers. Indeed, if it didn't, it wouldn't output any.


Does it include AP exam answers from an exam released after the model was trained? My impression was that its training data was largely collected in 2021 (hence the disclaimer about ChatGPT not being very aware of events from 2022), while this exam was released in 2022.



