When OpenAI released its chatbot ChatGPT last year, pundits were quick to announce the death of various writing-related fields, such as screenwriting, computer programming, and music composition. One field in particular stood out as a sector that would feel ChatGPT’s impact almost immediately: education. With ChatGPT’s technology, students can now easily cheat on papers and college admissions essays, while on the opposite end, teachers can outsource their curricula to AI, and no one would be the wiser.
But ChatGPT is hardly the end of education. Just as quickly as students started passing off the chatbot’s work as their own, new programs popped up to detect AI-written work, and teachers, looking to get ahead of their students, started integrating ChatGPT responses into their lesson planning.
The truth is, if leveraged well, AI has the potential to greatly enhance students’ ability to think critically and to expand their soft skills. As for skeptics who worry that kids will stop learning basic skills, avoid practicing, and forget general facts if they can rely on an AI to answer for them: in their self-determination theory, psychologists Edward Deci and Richard Ryan posit that humans are intrinsically driven by autonomy, relatedness, and competence. In other words, people will continue to learn regardless of any shortcuts thrown their way. The creation of Wikipedia is a great example. We didn’t stop learning history or science just because we could suddenly look up dates and formulas online. Instead, we simply gained an additional resource to help us fact-check and facilitate learning.
Seeing as education is one of AI’s first consumer use cases, and programs like ChatGPT are how millions of kids, teachers, and administrators will be introduced to AI, it is critical that we pay attention to the applications of AI and its implications for our lives. Below, we explore five predictions for AI and the future of learning, knowledge, and education.
One-on-one support, whether tutoring, coaching, mentorship, or even therapy, was once available only to the well-off. AI will help democratize these services for wider audiences. In fact, Bloom’s 2 sigma problem—the finding that students who received one-on-one teaching performed two standard deviations better than children in a traditional classroom—may finally have a scalable solution. AI can potentially act as a live tutor for anyone, with humans supplementing the AI to provide in-depth knowledge and emotional and behavioral support. Academic tool Numerade, for example, recently released an AI tutor, Ace, that can generate personalized study plans, curating the right content for each student’s skill level.
AI can also put time-constrained experts and academic celebrities within reach of all learners, regardless of resources. This development is incredibly democratizing for professions where mentorship and apprenticeship are important. Imagine if an early-stage startup founder could chat with an AI version of Marc Andreessen or Paul Graham on demand! That’s just what the startup Delphi is trying to do. Historical Figures, meanwhile, lets users converse with important historical figures like Abraham Lincoln, Plato, and Benjamin Franklin, while Character AI lets anyone create “characters,” real or imaginary, to have conversations with.
In stigmatized fields like mental health, AI-augmented solutions (such as Replika or Link), in addition to being less expensive and always available for an appointment, may be more approachable than a human therapist, encouraging patients who are afraid of a stranger’s judgment. AI can also adapt instantly to your stylistic preferences (e.g., whether you prefer cognitive behavioral therapy or a more traditional approach), solving the therapy industry’s known problem of difficult discovery and matching. AI-augmented therapy is also software, which has low marginal costs: more affordable end products can be created, enabling mass-market access. That’s not to say we envision a world where humans have no role. AI isn’t perfect, and it doesn’t yet reach 100% of human-level thoughtfulness and expertise. And some people, at some moments, will simply want a real human to engage with them.
With AI, it will become possible to personalize everything from learning modalities (e.g., visual versus text versus audio) to content (e.g., easily bringing in a kid’s or adult’s favorite character, hobby, or genre) to curriculum. It will also be possible to teach to one’s skill level and gaps more precisely: software can track your knowledge, test your progress, and repeat or reformat customized content based on what you know and where your gaps are. This should lead to higher engagement. Cameo, for example, launched a kids’ product featuring Blippi, Spider-Man, and other top intellectual property. A mom even asked “Spider-Man” to encourage her kid’s bathroom training, and it seems to have worked! AI will also better address different types of learners: those who are more advanced, kids who are falling behind on a specific concept or subject, students who are shy about raising their hand in a classroom, and those with special learning needs.
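To make the mechanics concrete, here is a minimal sketch (in Python, with entirely hypothetical names and mastery values) of the kind of adaptive loop described above: track an estimate of mastery per concept, quiz the learner on the weakest one, and update the estimate with each answer. Real tutoring systems use far richer models, but this feedback loop is the core idea.

```python
# Hypothetical adaptive-learning loop: all names and numbers are illustrative.
mastery = {"fractions": 0.9, "decimals": 0.4, "percentages": 0.6}


def next_concept() -> str:
    """Pick the concept the learner is currently weakest on."""
    return min(mastery, key=mastery.get)


def record_answer(concept: str, correct: bool, rate: float = 0.2) -> None:
    """Nudge the mastery estimate toward 1.0 (correct) or 0.0 (incorrect)."""
    target = 1.0 if correct else 0.0
    mastery[concept] += rate * (target - mastery[concept])


concept = next_concept()              # 'decimals' is the biggest gap
record_answer(concept, correct=True)  # the learner got the question right
print(concept, round(mastery[concept], 2))  # decimals 0.52
```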
Historically, students and educators have been natural trendsetters when it comes to productivity software. In fact, students and teachers were among the first users of startups like Canva and Qualtrics (the latter of which was later acquired by SAP). In Canva’s case, students at the University of Western Australia (where the founders attended college) picked up the design platform to produce their school yearbooks, while at Qualtrics, Northwestern marketing professor Angela Lee started using the service to easily collect data at scale for her MBA and doctoral students. Just as students and teachers took to early productivity tools, we can easily see them becoming a new generation of early adopters for software built on chat-based conversational interfaces, as AI grows more “human-like” through improved intelligence.
Another reason we expect teachers to embrace next-gen AI tools is that they, especially those at public institutions, are overworked and underfunded, leaving them with little time for what they’d prefer to focus on: their students. Today, teachers spend a significant amount of time grading, creating lesson plans, and preparing for their classes. AI, having learned from millions of earlier educational materials, can reduce teachers’ workloads by, among other things, drafting their plans and syllabi. All teachers then need to do is refine and tailor the output for their respective classrooms. With that time freed up, teachers can focus on previously “bonus” activities like giving individual students personalized attention.
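As a rough illustration of how little glue code such a drafting workflow requires, here is a minimal sketch that asks a chat model for a lesson-plan draft. It assumes the OpenAI Python SDK (pre-1.0 interface) and an API key in the environment; the model choice and prompt are stand-ins, not a specific product.

```python
import os

import openai  # assumes the pre-1.0 OpenAI Python SDK: pip install "openai<1"

openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the model for a first draft; the teacher refines and tailors it afterward.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are an experienced 5th-grade teacher."},
        {"role": "user", "content": "Draft a one-week lesson plan on fractions, "
                                    "with one activity and one quiz per day."},
    ],
)
print(response.choices[0].message.content)  # a starting point, not a final syllabus
```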
As for students, they love finding creative ways to save time and gain an edge in their work. Chegg was the previous generation’s darling. Now, new AI-driven resources, such as Photomath and Numerade, have popped up to help students solve and understand complex math and science problems. Colleges in particular are dense environments, and a popular product can quickly spread by word of mouth through student organizations, social clubs and events, or even professors who use it in classes with hundreds of students.
Since the release of ChatGPT, public educators have begun debating how, and whether, they should “police” schoolwork, college admissions, and beyond for evidence of AI-assisted work. School systems around the world, including New York, Seattle, and other large public school districts, have banned ChatGPT and related AI-writing sites for now. Even the continued use of college admissions essays has been called into question.
At the same time, many educators argue that ChatGPT is a technology that should be integrated into learning and teaching, and that leveraging AI will be a crucial career skill in the future. To realize this, we’ll need to make a series of adjustments in the classroom and in how we assess classroom achievement, just as we did when Wikipedia, calculators, the internet, personal laptops, and more came onto the scene and eventually became pivotal classroom technologies. We’re excited to see the emergence of both next-generation tools that help schools better assess student learning outcomes and award credentials, and AI-leveraging tools that make teachers’ and students’ lives better and easier.
One complication to consider is how access to this technology could give certain students big advantages in learning and output. In schools that ban AI tools, for example, students without internet access at home may get no exposure to the technology, while students with resources can learn about it and use it at home. This could also widen the gap between public and private school education: with their lower student-to-teacher ratios and higher budgets, private schools will find it easier to adopt and incorporate new tech.
Another big area of concern is “truth” in the age of AI. Algorithms are trained on available data, and that data is still shaped by human judgment and human behavior. This means that societal biases of all kinds, racial, gender-based, and more, get baked into the algorithms, and these biases will continue to be amplified. For example, Gmail’s sentence-completion AI assumes an investor must be male. Google’s Smart Compose team has made several attempts to correct the problem, but has thus far been unsuccessful.
In this bias-filled environment, where AI can provide factually incorrect information (or fake facts and news), fact-checking will become critical. Today’s AI-generated responses are especially dangerous because the models compose coherent prose whose polish can fool us into believing it is factually accurate and true. A University of Washington study profiled in the WSJ, for example, found that 72% of people who read an AI-composed news article thought it was credible, despite its facts being incorrect.
How do we curate high-quality, factually accurate content in an era when a firehose of it is being created by anyone and everyone, humans and robots alike? Trust in user-generated content and other non-branded outlets will degrade. On the flip side, audiences may place blind trust in personalities, brands, and “experts” they already follow and respect.
Lastly, we may create a generation of people who have competence without comprehension of the underlying details. This could cause problems in edge cases and crises, when knowledge of those details becomes important. Take the abstraction of web development: we’ve moved further and further away from low-level hardware, infrastructure, and backends, to a world with GitHub Copilot, where frontend engineers barely need to touch databases, and where no-code solutions serve non-technical users. This abstraction is great because it enables more creation and empowers users with fewer skills. But what happens when there’s a critical bug in the backend and no one understands how to fix it? The sketch below makes the dynamic concrete.
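Here is a minimal, self-contained sketch (hypothetical names, Python with SQLite) of the layering in question: application code calls a friendly high-level helper, while the raw SQL underneath stays out of sight until something breaks.

```python
import sqlite3

# Self-contained setup so the sketch runs as-is.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (42, 'Ada')")


def _query_db(sql: str, params: tuple) -> list:
    """Low-level layer: raw SQL that most engineers never look at."""
    return conn.execute(sql, params).fetchall()


def get_user(user_id: int):
    """High-level layer: the only call application code ever makes."""
    rows = _query_db("SELECT id, name FROM users WHERE id = ?", (user_id,))
    return rows[0] if rows else None


# Day to day, everyone calls get_user() and it "just works" -- until a bug
# in the layer below (a lock, a bad query plan) demands low-level knowledge.
print(get_user(42))  # (42, 'Ada')
```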
We are excited about all the ways AI will change learning, knowledge, education, personal development, and self-improvement. If you are building in these categories, reach out to me at askates@a16z.com!