>By pawning it off to AI to solve, you have learned nothing, not even how to prompt correctly, as test questions are usually formulated well enough that the AI doesn't need prompt massaging to get them right.
If you got AI to produce a working solution, you solved the problem. In the real world nobody who's paying you cares about the method as long as you deliver results. Students taught to solve easy problems by themselves will be at a big disadvantage in the workforce compared to students taught to solve hard problems using AI.
The part you're missing is that the evaluator already knows the answer. They're not looking to see that you can arrive at the correct answer, but to see that you know how to arrive at it. If "arriving at the correct answer" just means retrieving data from a Bayesian database using a Markov chain, you have only demonstrated that you provide no value in the chain and should indeed get a mark of zero, or get recycled.
>The part you're missing is that the evaluator already knows the answer. They're not looking to see that you can arrive at the correct answer, but to see that you know how to arrive at it.
The university evaluator is not the one paying you; the one paying you is your boss or customer. It doesn't matter how highly your university professor thinks of you: if you can't solve difficult problems as quickly as your peers because your university never taught you to solve hard problems with AI, you're going to be at a competitive disadvantage in the workforce when you graduate.
I don't think the entire purpose of schools is to teach you how to answer some specific set of questions; they want to improve your knowledge and skills in various domains, and the questions are merely a roundabout way to assess that. If you can answer the questions but lack the underlying knowledge, you're missing the most important part.
For most students, the "purpose" of studying computer science at university is to get a better job and make more money. And the people for whom this isn't the case are generally smart and motivated enough to learn the extra details they're interested in by themselves.
There's also no reason to learn to read and write! First graders could just point their phone at some text and have it read to them, or dictate to their phone to achieve the reverse. Why learn to swim, walk, run? Machines can do that for you too!
For now there are plenty of people who are significantly more capable than AI models. Someone who fully outsources to machines will never join that club.
You have to evaluate students on their own skills before you continue their education, because at some point AI models won't be able to help them. Anyone can use some LLM to pass the first few months of undergraduate engineering disciplines, but if you got through that and haven't learned a thing, you're completely fucked. Worse, you won't even notice the point at which AI starts to fail until you get your test results.
Once the above is not true anymore, education is pointless anyway. However, for now, AI can at best replace the worst performers, and only in some areas.
>You have to evaluate students on their own skills before you continue their education, because at some point AI models won't be able to help them.
If at some point AI models won't be able to help them, then give them assignments that reach the point where AI alone isn't enough, so they'll only be able to solve them if they learn whatever is necessary. This is what's meant by "making assignments harder". Students who learn to solve harder problems with AI will be more competitive in the workforce than students who only learn to solve easier problems by themselves. AI already allows people to solve harder problems than they could unassisted, but using it effectively is a skill that needs to be learned.
As an example, with AI, it'd be a reasonable assignment to ask students to write a working C compiler from scratch. Without AI that'd be completely beyond the reach of the vast majority of students.
That's great for autodidacts, but most students will be stumped by a complicated problem if you don't slowly walk them up an incline first.
Also what do you think is an appropriate assignment for first graders where "AI is not enough"? Are we supposed to give them problems meant for engineering majors?
The things you are saying at best apply to a few select areas of education and you are hyperfocusing on them. What you are neglecting is that a lot of education focuses on teaching tool use: reading and writing is a tool, CAD software is a tool, AI is a tool, even language is a tool. For many people the best way to learn to use tools is being taught by another human being. That human being has to evaluate their progress somehow. If a first grader uses their phone to have text read to them, this tells me very little, except maybe that they can at least understand spoken language to a degree.
Using LLMs effectively, especially without essentially becoming the LLM's meat-puppet, requires a set of skills many 10th graders still struggle with: putting what you mean into words, extracting meaning from text, and thinking critically about the information you are fed.
Finally there's the matter of philosophy, ethics, and politics, which also happen to be on the curriculum in some places. Are you going to let an LLM argue for you? If you have never learned to evaluate your own beliefs and turn them into something coherent that you can communicate to others, and instead let the LLM argue on your behalf, then congratulations: you have just un-personed yourself because you refused to let others help you become an actual individual in society. You're a sack of meat hooked up to a machine. ... It's probably obvious I feel strongly about this in particular.
At the end of the day, we can at least agree that people should learn to read and write? For now?
There are a number of assumptions in what you say that don't necessarily hold.
1) That school is simply about landing a job.
2) That there is value in students knowing how to have the AI do problems for them.
3) That the follow-on effects of manually solving difficult problems are discountable compared to the direct output of the work.
I would say you're absolutely correct in that people pay for the result and they don't really care how you got there. But that's a pretty shallow rationale, one which overvalues the ability to be the conduit from the source of requirements to the final output and undervalues the individual ability to think for oneself when faced with the challenges of technological, geopolitical, or simply uncontrolled personal circumstances.
"The conduit", who you seem to be believe is the one with marketplace advantage, is exactly the person I would say is the most vulnerable. Not because getting the AI to produce demands is without value, but that its quickly becoming a task that doesn't need the intermediary at all. Those magicians that can prompt/agent/mcp/etc their way through to positive successes are actively being challenged by the very AI producers which our conduits people now depend on. Removing the need for intermediaries would be a great competitive advantage for any AI vendor able to achieve it. But insofar as intermediaries create output from LLMs, they'll not be very well differentiated: the common wisdom tends to be the output, lest the AI be accused of hallucination or being overly supportive. But when everyone is using AI for everything the opportunities will be in arbitraging that which is missed by common wisdom... filling in the cracks that any responsible AI would simply never venture to consider. Our conduit-person will be at a decided disadvantage because it takes real thought to know when it's best to color within the lines, and when it's best to not do so.
And that's really it. A good education teaches you about the process of thought and makes you practiced at thinking. I would expect a better-educated, thinking person to adapt to and make use of technology such as generative AI more easily than a person who just knows how to deal with today's prompting needs. The thinking person will be able to understand the bigger picture and get a consistent, high-quality series of results, rather than just getting results as needed.
The output of a good education is you as a thoughtful & knowledgeable person: the output on the page is merely a means to that end. But if you treat the answer on the page as the only important thing... you're really evaluating the AI, not the person who acted as intermediary.
In other words, if a person following your advice comes in for a job, simply ask them in the interview which AIs they used, then just sign contracts with those vendors instead... you'll get better bang for your buck cutting out the middleman.
> Students taught to solve easy problems by themselves will be at a big disadvantage in the workforce compared to students taught to solve hard problems using AI.
What hard problems could students solve with AI that require them to be specially trained? It seems you are thinking of GPT-3-style "prompt engineering". That's a thing of the past. Students can just copy the assignment into the LLM. They don't need to be taught to do that.
If American billionaires couldn't exist, then America would be even poorer and more underdeveloped than Europe, the entire tech industry wouldn't exist, and it'd be entirely at the mercy of China. Because nobody's going to start a business in a country that violently confiscates their wealth just for being successful. The envy of people like yourself is a deep moral illness that destroys civilizations if left unchecked.
Have you actually spent an appreciable amount of time outside of the US? Europe isn't the place of destitution and squalor you imply. I highly suggest it, to widen your perspective at least. Maybe then you'll see it's quite the reverse in many cases.
It's the exact same thing they did with Google BigQuery, which initially was an absolutely amazing piece of technology before they smothered it with more and more limits and restrictions. It's like they're putting SREs first, customers second.
All of those criticisms apply to OpenAI+Codex too, but they're far more generous with limits than Anthropic, and more generous with granting fresh limits as an apology when they fuck up.
Especially since Codex faced the same issue but the team decided to explicitly default to only ~200k context to avoid surprises and degradation for users.
Such great strides in technology; their standard of living was so high that their president was literally shocked just by the abundance of food in a normal Western grocery store.
Massively capitalist in what way? They have a long history of price controls and nationalization and their current military junta is trying to nationalize the uranium industry. How could you even imagine a free market operating in a country that has a revolving door government that alternates between military dictatorships and transitional officials?
It has been good enough for a long time. CXMT has long made DRAM and NAND modules that are just as good as anyone else's, sometimes at half the price. The only thing they can't match is Samsung's flagship products.
However, because of that, prices for Chinese-made DDR5 have risen in China (and globally) along with everyone else's prices, just with a slight delay.
Iran is already quite dependent on the PRC as a trading partner; using RMB as their primary currency for these payments would further increase their 'counterparty risk'. That said, RMB exchange-rate manipulation may also be a significant factor in their decision.