> At its heart, education is a project of guiding learners to exercise their own agency in the world. Through education, learners should be empowered to participate meaningfully in society, industry, and the planet.
I agree but I have never seen an education system that had this as a goal. It's also not what society or employers actually want. What is desired are drones / automatons that take orders and never push back. Teaching people about agency is the opposite of that.
We are so stuck in a 19th century factory mindset everywhere, GenAI is just making it even more obvious.
Employers want a high-agency leadership class and drones for the individual contributors.
There are systems that nurture agency and leadership. They are the private schools and the Ivy League universities. And many great companies.
Most people don't want to be leaders and be judged based on impact. They want to be judged based on effort. They followed all the rules when writing their essay and should get an A+ even though their essay is unconvincing. If they get a bad mark, their response is to create a petition instead of fixing the problems.
Maybe we should attack our culture of busywork and stop blaming educators for failing to nurture agency.
the education i received in germany did have this goal. the teachers had this goal, and i have the impression that the teachers and schools my kids go to have this goal as well. i can't say how universal that is, but the opposite is not universal either.
the problem is that the goals are not effectively implemented. maybe it's more a dream than a goal, because the teachers and schools don't know how to actually reach that goal.
meaningful participation in society is often reduced to the ability to get a job by those outside of school, so you are right about employers. at least the large ones. unfortunately that works against them, because the current generation of juniors doesn't even want to learn anything. they are drones that just want to get paid, but are not motivated to learn what they need to do their job better.
Just yesterday, I talked to a neighbor who has two kids attending a local school in Mitte. He told me that the children are constantly indoctrinated into group conformity, obedience to authority, and fear of "wrong-think," with a good splash of wokie-talkie on top of it. To me, that sounds like a complete erasure of agency. Schools must provide knowledge, not override the nurture given by parents.
I have personally observed how locals are bullied by overseas guests and choose a delusional escape into virtue signaling rather than defending themselves. I consider German upbringing to be that of a defeated people.
I consider German upbringing to be that of a defeated people
i don't know what you are trying to imply here. how should the feeling of defeat affect the upbringing? (i mean, i am sure there would be an effect, but what would that look like?)
what i can tell you is that the sentiment i experienced was not defeat. after all, this is neither our experience, nor our parents' (and for the current generation, not even their grandparents'). the feeling we were taught was that of embarrassment, of how could we let that happen, and consequently the need to understand how we can keep that from ever happening again. except for a minority of right-wing sympathizers that we keep a close eye on.
I think that the Allied victors laid the foundation of the current German education system on initial denazification and subsequent extreme pacification, to such a degree of impotence that people refuse to defend themselves even when they are fully capable of neutralizing a criminal, preferring to become victims rather than use force.
> on what experience do you base that on?

I’ve seen multiple instances of robberies where the attacker was a head shorter and could have been easily stunned, or worse, with a single hit, yet people gave away their valuables because even the thought of using violence is taboo. Of course, the police always say, “Just file a complaint,” which never results in anything. It’s not a joke: even if violence is used purely to stop a criminal, the police will prosecute you, lol. I’m not American, but I like the idea that one could defend themselves and their property using all means necessary.
While you are likely correct about systems, I have known quite a few individual educators who have the goal of helping their fellow humans learn about their agency in the world.
I attended a public school system which, while at times did falter in various ways, did a fairly good job meeting its stated mission that was more or less exactly that.
I witnessed far more personal political pressure and cajoling than corporate/future-employer pressure. Where I went to school, the pressure on schools was usually from parents, students, and local groups concerned with civic matters. I had (until recently) indirect (and sometimes direct) exposure to this because one of my parents was an educator and a senior member of their department in a district adjoining the one I attended.
Where I went to college, it was always very clear to me what was shaped by industry vs. research and academia. I went to a research university for an uncommon hard-science degree and so there was a lot of employer interest, but the university cleverly drew a paywall around that and businesses had to pay the university to conduct research (or agree to profit sharing, patent licensing, etc). There was a clear, bright line separating corporate/employer interest from the classroom.
One of my favorite essays on a similar topic is Neal Koblitz's "The Case Against Computers in Math Education" (https://sites.math.washington.edu//~koblitz/mi.html). Now there's a quote:

> "Youngsters who are immersed in this popular culture are accustomed to large doses of passive, visual entertainment. They tend to develop a short attention span, and expect immediate gratification. They are usually ill equipped to study mathematics, because they lack patience, self-discipline, the ability to concentrate for long periods, and reading comprehension and writing skills."
For context, the essay is from 1996. You could have told me this is from the current year and I would have believed you.

> You could have told me this is from the current year and I would have believed you.

Agreed. It's a matter of degree, and I wonder what reaching the eventual limit (if there is one) looks like.

There's a Platonic dialogue with basically the same sentiment. People see what they want to see, even very smart people.
Ultimately these are tools, and I think the goal is to educate students to use them properly, also because I don't expect the knowledge paradox to disappear anytime soon with these models.
The cat is out of the bag. Kids will use AI to write papers, learn topics, cheat on take-home tests, etc. Only a completely closed-off environment with no access to the internet could prevent this.
The best option is to change the incentives. 95% of kids treat school as a necessary hurdle to enter the gentry white-collar class. Either make the incentives personal enrichment instead of letter grades or continue to give students every incentive to use AI at every opportunity.
> Only a completely closed-off environment with no access to the internet could prevent this.

Even then, an LLM running locally could still operate.

> Only a completely closed-off environment with no access to the internet could prevent this.

Okay, then we should do this.
> Either make the incentives personal enrichment instead of letter grades
This just straight up does not work.
The incentive for not being obese is perhaps the most perfect incentive ever: you live a happier life, with a greater quality of life, for longer, with less societal friction. It's the perfect poster child of "personal enrichment".
And yet, obesity is not declining. How is this possible?
Because internal locus of control as a "solution" for systemic issues just does not work. It doesn't maybe work, it doesn't sometimes work, it never works. If you don't address institutional issues and physiological issues then you're never going to find a solution.
What I mean is, kids use AI because it's easy. It's human nature to take the path of least resistance. This has a physiological, a biological, component to it. If we're just going to be waiting around for the day people aren't lazy then we're all gonna die.
Schools are artificial environments by design. They're controlled environments by design. If we leave children to their own devices, they grow up stupid.
The problem is that education is a cumulative endeavor. We don't give calculators to kindergartners trying to learn the number line. Why not? Because if you don't have the neural connections to intuitively, and quickly, understand the number line, then Algebra is going to be a nightmare.
AI can enhance learning, if and only if the prerequisites are satisfied. If you use AI to write but you don't know how to write, then you're going to progress on and struggle much more than you should. We carefully and deliberately introduce tools to children. Here's your graphing calculator... in Algebra I, after you've already graphed on paper hundreds of times. You already understand graphing, great, now you're allowed to speed it up.
We, as adults, are very far removed from this. We have an attitude of "what's the problem?" because we have already built those neural connections. It's a sort of Lord Farquaad "some of you may die, but that's a risk I'm willing to take" approach, but we don't even realize we do it.
> GenAI is a threat to student learning and wellbeing.
This blanket dismissal is not going to age well, and reads like a profession lashing out.
With the right system prompt, AI can be a patient, understanding, encouraging, non-judgemental tutor that adapts and moves at the student's pace. Most students cannot afford that type of human tutor, but an AI one could be free or very affordable. See "How AI Could Save (Not Destroy) Education" (https://www.youtube.com/watch?v=hJP5GqnTrNo) from Sal Khan of Khan Academy.
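As a minimal sketch of what "the right system prompt" might look like (the wording and the `tutor_messages` helper are my own illustration, not any product's actual prompt), one could assemble the chat messages like this:

```python
def tutor_messages(subject: str, student_question: str) -> list[dict]:
    """Build a chat-message list for a patient, student-paced tutor.

    The system prompt text is illustrative only; a real deployment
    would tune it against the specific model being used.
    """
    system_prompt = (
        f"You are a patient, encouraging, non-judgemental {subject} tutor. "
        "Never mock a question, however basic. "
        "Ask one short check-in question before moving on, "
        "and match the student's pace rather than your own."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": student_question},
    ]

# The resulting list follows the common system/user chat format
# and can be passed to whichever chat API is available.
msgs = tutor_messages("biology", "What are daughter cells?")
print(msgs[0]["role"])  # system
```

The point is only that the tutor's tone and pacing live in a few lines of prompt text, which is why this kind of tutoring can be cheap to replicate.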
> AI can be a patient, understanding, encouraging, non-judgemental tutor
Groan... no it can't. It can simulate all those things, but at the moment "AI" can't actually be patient or understanding, nor judgemental or non-judgemental.

OK, it can be encouraging: "You're one good student, $STUDENT_NAME!" (1)

(1) https://www.youtube.com/watch?v=jRPPdm09xZ8
I can say the exact same thing about you or anybody else. You can’t be a patient, understanding, encouraging, non-judgmental tutor. You can only simulate it.
I really can’t understand why people don’t understand this. What am I missing?
Philosophical zombies are supposed to be a thought experiment to demonstrate that solipsism and nihilism are stupid, not a rhetorical device to equate human minds to linear algebra statistical parrots.
Whether the AI is patient, understanding, etc., is entirely up to the person interacting with it to decide. Just like they decide this when interacting with people. You can never know the internal state of the other in a conversation so it is up to you to model it and if modeling it is best done with human metaphors then use human metaphors.
If you are using the most commonly available AI and have an average ability to craft a search query, right now AI is not a particularly useful tool for learning anything. It is far too inaccurate to learn anything challenging. The key term here is could: yes, it is possible, but there is nothing yet to say we shall get there.
It's not a blanket dismissal, it's a fact in context. It should read like a profession lashing out - that's what it is.
AI has enormous upsides and enormous downsides. The "you're going to look so dumb in the future" dismissal is lazy. Inevitability does not make something purely beneficial.
It's a fallacious line of thinking that's disappointingly common in tech-minded people (frequently seen in partnership with implications that Luddites were bad or stupid, quotes from historical criticisms of computers/calculators, and other immature usage of metaphor).
I'd respect the statement more if it acknowledged that AI had some benefit, or potential benefit in the future, but they did not want to use it currently.
"You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.” -- someone wise, or was he?
Sarcasm? We actually weren't allowed to take any kind of calculator into any of our advanced maths exams in University (and I'm talking just 15 years ago, not when they were newfangled things).
You want to limit the use of AI in schools just the way you want to limit calculators: ensure the student can do the math without calculators, even when the computation is hard and then teach them to use the calculator as a tool to help them move faster.
Restricting AI completely or introducing it too early, both would be harmful.
(These days) it's hard to know what you mean by this and whether you're being sarcastic.
No you don't give arithmetic students calculators for their exams, and you expect them to know how to do it without one.
Yes you probably give professionals who need to do arithmetic calculators so they can do it faster and with less errors.
Giving calculators to people who don't know how, why and/or when to use them will still get you bad results.
Giving calculators to someone who doesn't have any use for one is at best a waste of money and at worst a huge waste of time if the recipient becomes addicted to calculator games.
A calculator uses some type of finite precision arithmetic internally. If you run afoul of the limits of this arithmetic system, it may very confidently give you a wrong answer!
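A quick illustration in Python of how finite-precision arithmetic can confidently produce wrong answers (the same class of limits applies to handheld calculators):

```python
# 0.1 and 0.2 have no exact binary representation, so their sum
# is not exactly 0.3 in 64-bit floating point.
print(0.1 + 0.2 == 0.3)        # False

# Above 2**53, doubles can no longer represent every integer,
# so adding 1.0 changes nothing.
big = float(2**53)
print(big + 1.0 == big)        # True

# A device that silently rounds like this will confidently report
# an answer that is off in the last digits.
print(f"{0.1 + 0.2:.17f}")     # 0.30000000000000004
```

Nothing here raises an error or warning: the wrong answers look exactly like right ones, which is the commenter's point.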
This is such a bad, uninformed and genuinely pitiful take that it's my duty to address it.
I live in Poland. The country currently has a huge number of hydrological problems and parts of it are desertifying. There are numerous articles and scientific journals about it:
So why is my country becoming a desert? And what if I don't want the scarce amount of water that's remaining in our rivers to be used by a water hungry datacenter? Is that unhinged?
you should educate yourself on both the ecological impact of data centers and the economics of running a water facility. it's just too simplistic to reduce 'water used' to something that turns into rain and is then captured to be used again. good luck with your next naive comment, saying something like: rain is almost distilled water and treatment isn't required, so it can go into data centers directly
also, don't forget the people living near these facilities, who constantly face drains on their water supply due to the high requirements of the servers
Same argument people use against cows and almonds. The water is used and recycled. This is the weakest possible environmental argument you can possibly make. I’ll wait for the citizens to riot about their “wasted” water
If you are asking for human factors only then according to your link: "Decreases in the human population (such as from the massacres by Genghis Khan, the Black Death and the epidemics emerging in the Americas upon European contact)."
> Do you really believe all climate change in world history, which was dramatic and highly disruptive, was human caused?
It's nice that you can be so confidently wrong, just like an LLM. In reality, the climate change we have observed since the 1800s is largely human-caused.
'This human role in climate change is considered "unequivocal" and "incontrovertible". Nearly all actively publishing climate scientists say humans are causing climate change.'
So stop trying to push misinformation and educate yourself.
Wrong, it's changing much faster due to man-made greenhouse gasses. Make the effort to read through the science and facts I shared.
Here are some additional ones, from scientists (99.99% or 97%, depending on recency of studies, agree that man-made climate change is the leading driver of the global warming we are experiencing today):
"Scientists attribute the global warming trend observed since the mid-20th century to the human expansion of the "greenhouse effect" — warming that results when the atmosphere traps heat radiating from Earth toward space.
Life on Earth depends on energy coming from the Sun. About half the light energy reaching Earth's atmosphere passes through the air and clouds to the surface, where it is absorbed and radiated in the form of infrared heat. About 90% of this heat is then absorbed by greenhouse gases and re-radiated, slowing heat loss to space."
Hopefully that clarifies things.
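The quoted mechanism can be sanity-checked with the standard zero-dimensional energy-balance estimate (solar constant and albedo below are textbook values, not figures from the quote):

```python
# Without a greenhouse effect, Earth's emission temperature solves
# S * (1 - a) / 4 = sigma * T**4.
S = 1361.0        # solar constant, W/m^2 (textbook value)
a = 0.30          # planetary albedo (textbook value)
sigma = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

T_no_greenhouse = (S * (1 - a) / (4 * sigma)) ** 0.25
print(round(T_no_greenhouse))        # ~255 K, about -18 C

# Observed mean surface temperature is roughly 288 K (about +15 C);
# the ~33 K gap is the greenhouse warming the quote describes.
print(round(288 - T_no_greenhouse))  # ~33 K
```

In other words, the heat-trapping described in the quote is worth roughly 33 K of surface warming even in this crude model, which is why adding greenhouse gases matters.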
> I wonder how humans caused that to end
It's all explained in the link(s) I shared. Educate yourself.
How do you think that works exactly? That data centers cause more rain than would otherwise fall? How is that not an ecological change? Where does it come from?
> Using ChatGPT to write an essay is a bit like using a forklift to lift weights. The forklift might do a perfectly good job of moving around some heavy iron plates, but you’d be wasting your time.
The point of writing essays (or doing any other school assessment) is not the completed product, it's the work (and hopefully learning) that went into it.
You can definitely use AI responsibly, but many students will not and do not.
I find this all-or-nothing attitude extraordinary. Chatbots are the best personal tutors you'll ever find, and I tell students so. Do you need to understand mitosis for Bio 101? Ask your favorite chatbot. Then ask what daughter cells are - a question you might be too afraid to ask in class because maybe it was covered yesterday and you weren't listening. Then ask why there are no "son" cells - which you'd also be too afraid to ask about in class, but you want to know.
You can ask every dumb question. You can ask for clarification on every term you don't understand. You can go off on tangents. You can ask the same thing again ten minutes later because you forgot already.
No teacher or tutor or peer is going to answer you with the same patience and the same depth and the same lack of judgement.
Is it good enough for a grad student working on their thesis? Maybe not. Is it good enough for a high school student? Almost certainly. Does it give this high school student a way to better _really_ understand biology, because they can keep asking questions until they start to understand the answers? I think absolutely.
I'm not the biggest fan of AI for everything, but you couldn't create something more of a dagger to the heart of the current education system. If you are in the U.S., carefully watch for the D party to turn on AI in their messaging and you'll be witness to the strong influence that teachers unions have on them. Disagree with me all you want, but keep your eyes open; I guarantee you'll see it soon.
Interesting thought but my impression is that the democrats are much more beholden to other forces at play in the school system. I have friends who are teachers in the public school system, have been active in the union, and are indeed against AI in the classroom (although they're hardly rabid or unreasonable about it). On the other hand, the school administrators and IT departments are much more aggressive about pushing AI on them and pressing them to work it into the classroom somehow. Considering that the democrats are largely captured by corporate interest, and considering that tech/AI is one of the biggest corporate interests there is right now... I just don't see things playing out the way you predict.
Hey, I'm sorry, but such a blanket statement is pretty weak on its own. I'm interested in your perspective. Can you provide some concrete details that support your point? Because the people I know feel like AI in the classroom is inevitable and that they don't have much power in the face of the authority that wants to impose it on them, which would seem to contradict what you're saying.
There is no ethical generative AI, meaning fully permissioned datasets, end-to-end. That is not yet scientifically possible. So, 100%, everyone who claims this is lying, usually by omission, and some BS startup isn't going to invent this.
In my open letter, I wouldn't say "ethical" or "environmental" or any of these intersectional things because you're giving space for lies.
People want ethical AI even if it's impossible. So we get aspirationally ethical AI. Meaning, people really want to use generative AI, it makes life so easy, and people also want it to be ethical, because they don't want to make others upset, so they will buy into a memetic story that it is "ethical." Even if that story isn't true.
Aspirational ethics has already gotten hundreds of millions of dollars in funding. Look at generative AI in the media industry. Moonvalley - "FULLY LICENSED, COMMERCIAL SAFE" (https://www.moonvalley.com) - and yet, what content was their text encoder trained on? Not "fully licensed," not at all. Does everything else they make work without a text encoder? No. But people really want to believe in this. And it's led by DeepMind people! Adobe has the same problem. Some efforts are extremely well meaning. But everyone claiming expressly licensed / permissioned datasets is telling a lie by omission.
It's not possible to have only permissioned data. Anthropic and OpenAI concede, there's no technology without scraping. Listen, they're telling the truth.
Should we teach our kids to outsource their thinking to GenAI services where the big clouds control the gate? It would be less of an issue if local GenAI with comparable capability were more accessible to the general public.
AI is turning into a cult that's dividing us into those who support it and those who reject it. Arguments on both sides are flimsy, as no one really understands what it is. People see it as a black-box magic crystal.
> Further, GenAI adoption in industry is overwhelmingly aimed at automating and replacing human effort, often with the expectation that future “AGI” will render human intellectual and creative labor obsolete. This is a narrative we will not participate in
When every learner gets the high quality support and tutoring they need, all around the world, then we can talk about what you're unwilling to participate in. Until then, may every learner get a fantastic tutor via GenAI.
Previously: An open letter from educators who refuse the call to adopt [printed books, ballpoint pens, calculators, computers, the internet] in education
There's a big difference between "Here's this tool that helps you think" (ie calculator or pen) and "Here's this tool that does the thinking for you". And before you say that AI can fall under the first option, plenty of schoolchildren will take the easy way out and not use it responsibly.
> Current GenAI technologies represent unacceptable legal, ethical and environmental harms, including exploitative labour, piracy of countless creators' and artists' work, harmful biases, mass production of misinformation, and reversal of the global emissions reduction trajectory.
It's really annoying that political stuff always pollutes things. I largely agree with the position about GenAI being bad for education, but that position is not strengthened by tacking on a bunch of political drivel.
Whether you agree or disagree, I am happy to see a community putting out (in writing even) their problems with AI as it exists.
To the degree it is possible I would like to think the AI community would try to address their issues.
I understand that some of the items in their open letter show a complete incompatibility with AI — period. But misinformation, harmful biases, and energy use should be things we all want to improve.
I don't think resource use is any business of teachers to be honest.
The problem with AI currently is that the students have figured out how to use it to cheat, but the teachers haven't figured out how to use it to teach.
AI is here, we need to figure out how to use it effectively and responsibly. Schools should be leading on this, instead of putting their heads in the sand and hoping it goes away.
> At its heart, education is a project of guiding learners to exercise their own agency in the world. Through education, learners should be empowered to participate meaningfully in society, industry, and the planet.
I agree but I have never seen an education system that had this as a goal. It's also not what society or employers actually want. What is desired are drones / automatons that take orders and never push back. Teaching people about agency is the opposite of that.
We are so stuck in a 19th century factory mindset everywhere, GenAI is just making it even more obvious.
Employers want a high-agency leadership class and drones for the individual contributors.
There are systems that nurture agency and leadership. They are the private schools and the Ivy League universities. And many great companies.
Most people don't want to be leaders and be judged based on impact. They want to be judged based on effort. They followed all the rules when writing their essay and should get an A+ even though their essay is unconvincing. If they get a bad mark, their response is to create a petition instead of fixing the problems.
Maybe we should attack our culture of busywork and stop blaming educators for failing to nurture agency.
the education i received in germany did have this goal. the teachers had this goal, and i have the impression that the teachers and schools my kids go to have this goal as well. i can't say how universal that is, but it the opposite is not universal either.
the problem is that the goals are not effectively implemented. maybe it's more a dream than a goal, because the teachers and schools don't know how to actually reach that goal.
meaningful participation in society is often reduced to the ability to get a job by those outside of school, so you are right about employers. at least the large ones. unfortunately that works against them, because the current generation of juniors doesn't even want to learn anything. they are drones that just want to get paid, but are not motivated to learn what they need to do their job better.
Just yesterday, I talked to a neighbor who has two kids attending a local school in Mitte. He told me that the children are constantly indoctrinated into group conformity, obedience to authority, and fear of "wrong-think," with a good splash of wokie-talkie on top of it. To me, that sounds like a complete erasure of agency. Schools must provide knowledge, not override the nurture given by parents.
I have personally observed how locals are bullied by overseas guests and choose a delusional escape into virtue signaling rather than defending themselves. I consider German upbringing to be that of a defeated people.
I consider German upbringing to be that of a defeated people
i don't know what you are trying to imply here. how should the feeling of defeat affect the upbringing? (i mean,i am sure there would be an effect, but how would that look like?)
what i can tell you is that the sentiment i experienced was not defeat. after all this is neither our, nor our parents, (and for the current generation also not their grandparents) experience. the feeling we were taught was that of embarrassment, of how could we let that happen and consequently the need to understand how we can avoid that from ever happening again. except for a minority or right wing sympathizers that we keep a close eye on.
I think that the Allied victors laid the foundation of the current German education system on initial denazification and subsequent extreme pacification, to such a degree of impotence that people refuse to defend themselves even when they are fully capable of neutralizing a criminal, preferring to become victims rather than use force.
i don't have this feeling at all. on what experience do you base that on?
I’ve seen multiple instances of robberies where the attacker was a head shorter and could have been easily stunned, or worse, with a single hit, yet people gave away their valuables because even the thought of using violence is taboo. Of course, the police always say, “Just file a complaint,” which never results in anything. It’s not a joke: even if violence is used purely to stop a criminal, the police will prosecute you, lol. I’m not American, but I like the idea that one could defend themselves and their property using all means necessary.
While you are likely correct about systems, I have known quite a few individual educators who have the goal of helping their fellow humans learn about their agency in the world.
I attended a public school system which, while at times did falter in various ways, did a fairly good job meeting its stated mission that was more or less exactly that.
I witnessed far more personal political pressure and cajoling than corporate/future employer. Where I went to school the pressure on schools was usually from parents, students, and local groups concerned with civil matters. I had (until recently) indirect (and sometimes direct) exposure to this because one of parents was an educator and a senior member of their department in an adjoining district to the one I attended.
Where I went to college, it was always very clear to me what was shaped by industry vs. research and academia. I went to a research university for an uncommon hard-science degree and so there was a lot of employer interest, but the university cleverly drew a paywall around that and businesses had to pay the university to conduct research (or agree to profit sharing, patent licensing, etc). There was a clear, bright line separating corporate/employer interest from the classroom.
One of my favorite essays on a similar topic: https://sites.math.washington.edu//~koblitz/mi.html
Neal Koblitz's "The Case Against Computers in Math Education".
Wow. Now there's a quote:
> "Youngsters who are immersed in this popular culture are accustomed to large doses of passive, visual entertainment. They tend to develop a short attention span, and expect immediate gratification. They are usually ill equipped to study mathematics, because they lack patience, self-discipline, the ability to concentrate for long periods, and reading comprehension and writing skills."
For context, the essay is from 1996. You could have told me this is from the current year and I would have believed you.
> You could have told me this is from the current year and I would have believed you.
Agreed. It's a matter of degree, and I wonder what reaching the eventual limit (if there is one) looks like.
There’s a platonic dialogue that has basically the same sentiment.
People see what they want to see, even very smart people.
Ultimately those are tools and I think the goal is to educate students to use them properly. Also because I don't expect the knowledge paradox to disappear anytime soon with these models.
The cat is out of the bag. Kids will use AI to write papers, learn topics, cheat on take-home tests, etc. Only a completely closed-off environment with no access to the internet could prevent this.
The best option is to change the incentives. 95% of kids treat school as a necessary hurdle to enter the gentry white-collar class. Either make the incentives personal enrichment instead of letter grades or continue to give students every incentive to use AI at every opportunity.
> Only a completely closed-off environment with no access to the internet could prevent this.
Even then, an LLM running locally could still operate.
> Only a completely closed-off environment with no access to the internet could prevent this.
Okay, then we should do this.
> Either make the incentives personal enrichment instead of letter grades
This just straight up does not work.
The incentive for not being obese is perhaps the most perfect incentive ever: you live a happier life, with a greater quality of life, for longer, with less societal friction. It's the perfect poster child of "personal enrichment".
And yet, obesity is not declining. How is this possible?
Because internal locus of control as a "solution" for systemic issues just does not work. It doesn't maybe work, it doesn't sometimes work, it never works. If you don't address institutional issues and physiological issues then you're never going to find a solution.
What I mean is, kids use AI because it's easy. It's human nature to take the path of least resistance. This has a physiological, a biological, component to it. If we're just going to be waiting around for the day people aren't lazy then we're all gonna die.
Schools are artificial environments by design. They're controlled environments by design. If we leave children to their own devices, they grow up stupid.
The problem is that education is a cumulative endeavor. We don't give calculators to kindergartners trying to learn the number line. Why not? Because if you don't have the neural connections to intuitively, and quickly, understand the number line, then Algebra is going to be a nightmare.
AI can enhance learning, if and only if the prerequisites are satisfied. If you use AI to write but you don't know how to write, then you're going to progress on and struggle much more than you should. We carefully and deliberately introduce tools to children. Here's your graphing calculator... in Algebra I, after you've already graphed on paper hundreds of times. You already understand graphing, great, now you're allowed to speed it up.
We, as adults, are very far removed from this. We have an attitude of "what's the problem?" because we already built those neural connections. It's a sort of Lord Farquaad "some of you may die, but that's a risk I'm willing to take" approach, but we don't even realize we're doing it.
> GenAI is a threat to student learning and wellbeing.
This blanket dismissal is not going to age well, and reads like a profession lashing out.
With the right system prompt, AI can be a patient, understanding, encouraging, non-judgemental tutor that adapts and moves at the student's pace. Most students cannot afford that type of human tutor, but an AI one could be free or very affordable.
"How AI Could Save (Not Destroy) Education" (https://www.youtube.com/watch?v=hJP5GqnTrNo) from Sal Khan of Khan Academy
> AI can be a patient, understanding, encouraging, non-judgemental tutor
Groan... no it can't. It can simulate all those things, but at the moment, "AI" can't actually be patient or understanding, and it can't be either judgemental or non-judgemental.
OK it can be encouraging. "You're one good student, $STUDENT_NAME!" (1).
1) https://www.youtube.com/watch?v=jRPPdm09xZ8
I can say the exact same thing about you or anybody else. You can’t be a patient, understanding, encouraging, non-judgmental tutor. You can only simulate it.
I really can’t understand why people don’t understand this. What am I missing?
Philosophical zombies are supposed to be a thought experiment to demonstrate that solipsism and nihilism are stupid, not a rhetorical device to equate human minds to linear algebra statistical parrots.
Geezus freaking christ.
Now is that a simulation of someone who thinks he's responding to a cretin... or actually the feelings of someone who thinks he's talking to a cretin?
[flagged]
This simulation is too realistic. I’d like to stop the game now please.
Whether the AI is patient, understanding, etc., is entirely up to the person interacting with it to decide, just as it is when interacting with people. You can never know the internal state of the other party in a conversation, so it is up to you to model it, and if modeling it is best done with human metaphors, then use human metaphors.
Most students cannot afford the expertise necessary to make AI patient, etc.
I think the original phrase was made with the assumption "as it is right now".
I do share the concerns of the undersigned, even though I don't necessarily agree with all the statements in the letter.
If you are using the most commonly available AI and have an average ability to craft a query, right now AI is not a particularly useful tool for learning anything. It is far too inaccurate for learning anything challenging. The key term here is could: yes, it is possible, but there is nothing yet to say we will get there.
My experience in higher education is that students use AI for one of two things:
1. To do the homework, because they view classes and grades as a barrier to their future rather than preparation for it.
2. In place of a well-crafted query in an academic database.
It's not a blanket dismissal, it's a fact in context. It should read like a profession lashing out - that's what it is.
AI has enormous upsides and enormous downsides. The "you're going to look so dumb in the future" dismissal is lazy. Inevitability does not make something purely beneficial.
It's a fallacious line of thinking that's disappointingly common in tech-minded people (frequently seen in partnership with implications that Luddites were bad or stupid, quotes from historical criticisms of computers/calculators, and other immature usage of metaphor).
I'd respect the statement more if it acknowledged that AI had some benefit, or potential benefit in the future, but they did not want to use it currently.
Maybe if we move from LLMS to real AI it will have benefits.
"You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.” -- someone wise, or was he?
Makes sense. You also don't give calculators to students of arithmetic.
Sarcasm? We actually weren't allowed to take any kind of calculator into any of our advanced maths exams in University (and I'm talking just 15 years ago, not when they were newfangled things).
Can’t tell if you are serious but I will assume you are.
Why not? Seems like a logical conclusion.
1. Introduce the concept.
2. Demonstrate an intuitive algorithm.
3. Assist students as they practice and internalize the algorithm.
4. Reinforce this learning by encouraging them to teach each other.
5. Show them how to use tools by repeating this process with the tool as the concept.
You want to limit the use of AI in schools the same way you limit calculators: ensure the student can do the math without a calculator, even when the computation is hard, and then teach them to use the calculator as a tool to help them move faster.
Restricting AI completely or introducing it too early would both be harmful.
I'm not really convinced. This sounds reasonable but I can't formulate a good argument in favor.
(These days) it's hard to know what you mean by this and whether you're being sarcastic.
No, you don't give arithmetic students calculators for their exams; you expect them to know how to do it without one.
Yes, you probably give professionals who need to do arithmetic calculators, so they can do it faster and with fewer errors.
Giving calculators to people who don't know how, why and/or when to use them will still get you bad results.
Giving calculators to someone who doesn't have any use for one is at best a waste of money and at worst a huge waste of time if the recipient becomes addicted to calculator games.
The person you're responding to has clearly used the word "student". What on earth are you on about?
I interpreted "students of arithmetic" as anyone that practices arithmetic - similar to "students of medicine", etc.
Seems like a reasonable expansion of the concept to me. Why the aggressive dismissal?
[flagged]
You can rely on the answer a calculator gives you. There's no danger that it will simply be confidently wrong.
A calculator uses some type of finite precision arithmetic internally. If you run afoul of the limits of this arithmetic system, it may very confidently give you a wrong answer!
Some calculators will confidently state incorrect answers to questions like:
(10^15 + 7.2 − 10^15) * 100
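A quick sketch of how that goes wrong (assuming IEEE 754 double precision, which most calculators and programming languages use): near 10^15 the spacing between representable doubles is 0.125, so the 7.2 gets rounded before the subtraction can recover it.

```python
# True value: (10**15 + 7.2 - 10**15) * 100 == 720.
# In 64-bit floats, 1e15 + 7.2 rounds to 1000000000000007.25
# (doubles near 1e15 are spaced 0.125 apart), so the rounding
# error survives the subtraction and gets multiplied by 100.
result = (1e15 + 7.2 - 1e15) * 100
print(result)  # 725.0 - confidently wrong by 5
```

The calculator doesn't flag any of this; it just presents 725 with full confidence.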
LLMs are notoriously bad at math, but they’re language models, not math models, so that shouldn’t be surprising.
If you want an LLM to do math you just ask it to write a program with tests.
How about the original Pentium's FDIV floating-point bug?
1 kg of beef takes as much energy as 60,000 ChatGPT prompts, and as much water as 51,383,333 ChatGPT prompts.
Stop eating meat, driving your car, and taking hot showers. Then we can talk about prompt resources.
Why not all of it?
Cost of opportunity and diminishing returns.
It's stupid to optimise a 0.01% case when you can spend that time on a 25% case.
[flagged]
Water? I didn’t realize that when you use water it disappears from the universe, or even from earth, or even from the local ecosystem
Energy doesn't disappear, but obviously it moves from useful forms to unuseful forms. Same with water. Your sarcasm just comes off as naive arrogance.
The water is gone? Where did it go?
Converted to steam and carried out of the local ecosystem by wind. From the perspective of anyone downstream the water is gone.
Really? All water that goes into the air is gone forever? And then the wind blows it away? Incredible
Why are you being so antagonistic? Are you ok?
> Water? I didn’t realize that when you use water it disappears from the universe, or even from earth, or even from the local ecosystem
Your own standard includes the local ecosystem.
[flagged]
This is such a bad, uninformed and genuinely pitiful take that it's my duty to address it.
I live in Poland. There are currently a huge number of hydrological problems all across the country, and parts of it are desertifying. There are numerous articles and scientific journals about it:
https://www.agroberichtenbuitenland.nl/actueel/nieuws/2024/0...
So why is my country becoming a desert? And what if I don't want the scarce amount of water that's remaining in our rivers to be used by a water hungry datacenter? Is that unhinged?
you should educate yourself on both the ecological impact of data centers and the economics of running a water facility. it's just too simplistic to reduce the 'water' used to something that turns into rain and then gets captured to be used again. good luck with your next naive comment saying something like: rain is almost distilled water and treatment isn't required, so it can go into data centers directly
also don't forget the people living near these facilities, who constantly face water drains due to the HIGH requirements of a server
some reading: https://thereader.mitpress.mit.edu/the-staggering-ecological...
https://aucgroup.net/water-treatment-plant-costs/
Same argument people use against cows and almonds. The water is used and recycled. This is the weakest possible environmental argument you can possibly make. I’ll wait for the citizens to riot about their “wasted” water
If we can freely recycle water how little of it would we need on Earth before we would see an ecological change?
Can you explain why Mesopotamia was once an agricultural Mecca but is now an arid desert?
Or what human event caused the little ice age? https://en.m.wikipedia.org/wiki/Little_Ice_Age
Do you really believe all climate change in world history, which was dramatic and highly disruptive, was human caused?
> Can you explain why Mesopotamia was once an agricultural Mecca but is now an arid desert?
I'm not familiar, so no, not off the top of my head. Do you?
> Or what human event caused the little ice age? https://en.m.wikipedia.org/wiki/Little_Ice_Age
If you are asking for human factors only then according to your link: "Decreases in the human population (such as from the massacres by Genghis Khan, the Black Death and the epidemics emerging in the Americas upon European contact)."
> Do you really believe all climate change in world history, which was dramatic and highly disruptive, was human caused?
I said nothing of the sort.
The climate changed because it’s always changing. Humans adapted, but they didn’t cause it
It's nice that you can be so confidently wrong, just like an LLM. In reality, the climate change we have observed since the 1800s is largely human-caused.
It's not an opinion, it's a fact. https://en.wikipedia.org/wiki/Scientific_consensus_on_climat...
Read it out:
'This human role in climate change is considered "unequivocal" and "incontrovertible". Nearly all actively publishing climate scientists say humans are causing climate change.'
So stop trying to push misinformation and educate yourself.
The climate is changing just as it always has. We used to be in an ice world, I wonder how humans caused that to end
> The climate is changing just as it always has
Wrong, it's changing much faster due to man-made greenhouse gasses. Make the effort to read through the science and facts I shared.
Here are some additional ones, from scientists (97% to 99.99% of whom, depending on the recency of the study, agree that man-made climate change is the leading driver of the global warming we are experiencing today):
- https://science.nasa.gov/climate-change/causes/
- https://scienceexchange.caltech.edu/topics/sustainability/ev...
"Scientists attribute the global warming trend observed since the mid-20th century to the human expansion of the "greenhouse effect"1 — warming that results when the atmosphere traps heat radiating from Earth toward space.
Life on Earth depends on energy coming from the Sun. About half the light energy reaching Earth's atmosphere passes through the air and clouds to the surface, where it is absorbed and radiated in the form of infrared heat. About 90% of this heat is then absorbed by greenhouse gases and re-radiated, slowing heat loss to space."
Hopefully that clarifies things.
> I wonder how humans caused that to end
It's all explained in the link(s) I shared. Educate yourself.
How do you think that works exactly? That data centers cause more rain than would otherwise fall? How is that not an ecological change? Where does it come from?
One of my favourite quotes on this topic:
> Using ChatGPT to write an essay is a bit like using a forklift to lift weights. The forklift might do a perfectly good job of moving around some heavy iron plates, but you’d be wasting your time.
The point of writing essays (or doing any other school assessment) is not the completed product, it's the work (and hopefully learning) that went into it.
You can definitely use AI responsibly, but many students will not and do not.
I find this all-or-nothing attitude extraordinary. Chatbots are the best personal tutors you'll ever find, and I tell students so. Do you need to understand mitosis for Bio 101? Ask your favorite chatbot. Then ask what daughter cells are - a question you might be too afraid to ask in class because maybe it was covered yesterday and you weren't listening. Then ask why there are no "son" cells - which you'd also be too afraid to ask about in class, but you want to know.
You can ask every dumb question. You can ask for clarification on every term you don't understand. You can go off on tangents. You can ask the same thing again ten minutes later because you forgot already.
No teacher or tutor or peer is going to answer you with the same patience and the same depth and the same lack of judgement.
Is it good enough for a grad student working on their thesis? Maybe not. Is it good enough for a high school student? Almost certainly. Does it give this high school student a way to _really_ understand biology, because they can keep asking questions until they start to understand the answers? I think absolutely.
I'm not the biggest fan of AI for everything, but you couldn't create something more of a dagger to the heart of the current education system. If you are in the U.S., watch carefully for the D party to turn on AI in their messaging, and you'll witness the strong influence that teachers' unions have on them. Disagree with me all you want, but keep your eyes open; I guarantee you'll see it soon.
Interesting thought but my impression is that the democrats are much more beholden to other forces at play in the school system. I have friends who are teachers in the public school system, have been active in the union, and are indeed against AI in the classroom (although they're hardly rabid or unreasonable about it). On the other hand, the school administrators and IT departments are much more aggressive about pushing AI on them and pressing them to work it into the classroom somehow. Considering that the democrats are largely captured by corporate interest, and considering that tech/AI is one of the biggest corporate interests there is right now... I just don't see things playing out the way you predict.
the administrators and IT departments are not in the teachers unions.
Yes... exactly my point.
and the teachers unions have vastly more power than those guys
Hey, I'm sorry, but such a blanket statement is pretty weak on its own. I'm interested in your perspective. Can you provide some concrete details that support your point? Because the people I know feel like AI in the classroom is inevitable and that they don't have much power in the face of the authority that wants to impose it on them, which would seem to contradict what you're saying.
Every teacher I talked to has said the influence of AI has been negative. Why wouldn't they fight to remove it from the classroom?
They are talking about cheating with it, not replacing teaching with it.
There is no ethical generative AI, meaning fully permissioned datasets, end-to-end. That is not yet scientifically possible. So, 100%, everyone who claims this is lying, usually by omission, and some BS startup isn't going to invent it.
If I were writing the open letter, I wouldn't say "ethical" or "environmental" or any of these intersectional things, because you're giving space for lies.
People want ethical AI even if it's impossible. So we get aspirationally ethical AI. Meaning, people really want to use generative AI, it makes life so easy, and people also want it to be ethical, because they don't want to make others upset, so they will buy into a memetic story that it is "ethical." Even if that story isn't true.
Aspirational ethics has already gotten hundreds of millions of dollars in funding. Look at generative AI in the media industry. Moonvalley - "FULLY LICENSED, COMMERCIAL SAFE" (https://www.moonvalley.com) - and yet, what content was their text encoder trained on? Not "fully licensed," no, not at all. Does everything else they make work without a text encoder? No. So... But people really want to believe in this. And it's led by DeepMind people! Adobe has the same problem. Some efforts are extremely well-meaning. But everyone claiming expressly licensed / permissioned datasets is telling a lie by omission.
It's not possible to have only permissioned data. Anthropic and OpenAI concede that there's no technology without scraping. Listen, they're telling the truth.
I loathe this entire line of "ethical" moral grandstanding.
AI should be trained on all data that is available. For a significant part of the dataset, it's the most useful that data has ever been.
Should we teach our kids to outsource their thinking to those GenAI services where the big clouds control the gate? It would be less of an issue if local GenAI with comparable capability were more accessible to the general public.
AI is turning into a cult that's dividing us into those who support it and those who reject it. Arguments on both sides are flimsy, as no one really understands what it is. People see it as a black-box magic crystal.
> Further, GenAI adoption in industry is overwhelmingly aimed at automating and replacing human effort, often with the expectation that future “AGI” will render human intellectual and creative labor obsolete. This is a narrative we will not participate in
When every learner gets the high quality support and tutoring they need, all around the world, then we can talk about what you're unwilling to participate in. Until then, may every learner get a fantastic tutor via GenAI.
Just in: new tariffs have been announced on educators. Not sure on whom, but there it is.
Also,
>global community
As long as global means rich: zero signatories from China, India, Russia, Pakistan, Bangladesh, Indonesia, or Africa.
do you think those countries have access to computers with AI for their education?
Yes.
Previously: An open letter from educators who refuse the call to adopt [printed books, ballpoint pens, calculators, computers, the internet] in education
There's a big difference between "Here's this tool that helps you think" (ie calculator or pen) and "Here's this tool that does the thinking for you". And before you say that AI can fall under the first option, plenty of schoolchildren will take the easy way out and not use it responsibly.
> Current GenAI technologies represent unacceptable legal, ethical and environmental harms, including exploitative labour, piracy of countless creators' and artists' work, harmful biases, mass production of misinformation, and reversal of the global emissions reduction trajectory.
It's really annoying that political stuff always pollutes things. I largely agree with the position about GenAI being bad for education, but that position is not strengthened by tacking on a bunch of political drivel.
Whether you agree or disagree, I am happy to see a community putting out (in writing even) their problems with AI as it exists.
To the degree it is possible I would like to think the AI community would try to address their issues.
I understand that some of the items in their open letter show a complete incompatibility with AI — period. But misinformation, harmful biases, energy resource use should be things we all want to improve.
I don't think resource use is any business of teachers to be honest.
The problem with AI currently is that the students have figured out how to use it to cheat, but the teachers haven't figured out how to use it to teach.
AI is here, we need to figure out how to use it effectively and responsibly. Schools should be leading on this, instead of putting their heads in the sand and hoping it goes away.
This is absurd. I’ve learned so much from having an LLM tutor me as I go through a dense book, for example.
[dead]