MIT Technology Review


Founded at the Massachusetts Institute of Technology in 1899, MIT Technology Review is a world-renowned, independent media company whose insight, analysis, reviews, interviews and live events explain the newest technologies and their commercial, social and political impacts.

Don't post archive.is links or full text of articles, you will receive a temp ban.


School’s out and it’s high summer, but a bunch of teachers are plotting how they’re going to use AI this upcoming school year. God help them.

On July 8, OpenAI, Microsoft, and Anthropic announced a $23 million partnership with one of the largest teachers’ unions in the United States to bring more AI into K–12 classrooms. Called the National Academy for AI Instruction, the initiative will train teachers at a New York City headquarters on how to use AI both for teaching and for tasks like planning lessons and writing reports, starting this fall.

The companies could face an uphill battle. Right now, most of the public perceives AI’s use in the classroom as nothing short of ruinous—a surefire way to dampen critical thinking and hasten the decline of our collective attention span (a viral story from New York magazine, for example, described how easy it now is to coast through college thanks to constant access to ChatGPT).

Amid that onslaught, AI companies insist that AI promises more individualized learning, faster and more creative lesson planning, and quicker grading. The companies sponsoring this initiative are, of course, not doing it out of the goodness of their hearts.

No—as they hunt for profits, their goal is to make users out of teachers and students. Anthropic is pitching its AI models to universities, and OpenAI offers free courses for teachers. In an initial training session for teachers by the new National Academy for AI Instruction, representatives from Microsoft showed teachers how to use the company’s AI tools for lesson planning and emails, according to the New York Times.

It’s early days, but what does the evidence actually say about whether AI is helping or hurting students? There’s at least some data to support the case made by tech companies: A recent survey of 1,500 teens conducted by Harvard’s Graduate School of Education showed that kids are using AI to brainstorm and answer questions they’re afraid to ask in the classroom. Studies examining settings ranging from math classes in Nigeria to college physics courses at Harvard have suggested that AI tutors can lead students to become more engaged.

And yet there’s more to the story. The same Harvard survey revealed that kids are also frequently using AI for cheating and shortcuts. And an oft-cited paper from Microsoft found that relying on AI can reduce critical thinking. Not to mention the fact that “hallucinations” of incorrect information are an inevitable part of how large language models work.

There’s a lack of clear evidence that AI can be a net benefit for students, and it’s hard to trust that the AI companies funding this initiative will give honest advice on when not to use AI in the classroom.

Despite the fanfare around the academy’s launch, and the fact that the first teacher training is scheduled to take place in just a few months, OpenAI and Anthropic told me they couldn’t share any specifics.

It’s not as if teachers themselves aren’t already grappling with how to approach AI. One such teacher, Christopher Harris, who leads a library system covering 22 rural school districts in New York, has created a curriculum aimed at AI literacy. Topics range from privacy when using smart speakers (a lesson for second graders) to misinformation and deepfakes (instruction for high schoolers). I asked him what he’d like to see in the curriculum used by the new National Academy for AI Instruction.

“The real outcome should be teachers that are confident enough in their understanding of how AI works and how it can be used as a tool that they can teach students about the technology as well,” he says. The thing to avoid would be overfocusing on tools and pre-built prompts that teachers are instructed to use without knowing how they work.

But all this will be for naught without an adjustment to how schools evaluate students in the age of AI, Harris says: “The bigger issue will be shifting the fundamental approaches to how we assign and assess student work in the face of AI cheating.”

The new initiative is led by the American Federation of Teachers, which represents 1.8 million members, as well as the United Federation of Teachers, which represents 200,000 members in New York. If they win over these groups, the tech companies will have significant influence over how millions of teachers learn about AI. But some educators are resisting the use of AI entirely, including several hundred who signed an open letter last week.

Helen Choi is one of them. “I think it is incumbent upon educators to scrutinize the tools that they use in the classroom to look past hype,” says Choi, an associate professor at the University of Southern California, where she teaches writing. “Until we know that something is useful, safe, and ethical, we have a duty to resist mass adoption of tools like large language models that are not designed by educators with education in mind.”

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.


From MIT Technology Review via this RSS feed


This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

California is set to become the first US state to manage power outages with AI

California’s statewide power grid operator is poised to become the first in North America to deploy artificial intelligence to manage outages, MIT Technology Review has learned.

At an industry summit in Minneapolis tomorrow, the California Independent System Operator is set to announce a deal to run a pilot program using new AI software called Genie, from the energy-services giant OATI.

The software uses generative AI to carry out real-time analyses for grid operators and comes with the potential to autonomously make decisions about key functions on the grid, a switch that might resemble going from uniformed traffic officers to sensor-equipped stoplights. Read the full story.

—Alexander C. Kaufman

Why it’s so hard to make welfare AI fair

There are plenty of stories about AI that’s caused harm when deployed in sensitive situations, and in many of those cases, the systems were developed without much concern for what it meant to be fair or how to implement fairness. But the city of Amsterdam did spend a lot of time and money to try to create ethical AI—in fact, it followed every recommendation in the responsible AI playbook. But when it deployed its system in the real world, it still couldn’t remove biases. So why did Amsterdam fail? And more importantly: Can this ever be done right?

Join our editor Amanda Silverman, investigative reporter Eileen Guo, and Gabriel Geiger, an investigative reporter from Lighthouse Reports, for a subscriber-only Roundtables conversation at 1pm ET on Wednesday July 30 to explore whether algorithms can ever be fair. Register here!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Trump’s ‘big, beautiful bill’ is already hurting sick children
And hundreds of hospitals are likely to close, too. (New Yorker $)
+ His administration is going after easy targets, which include sick children. (Salon $)

2 The US overseas worker purge is hitting Amazon hard
Its warehouse employees are losing their right to work in the US. (NYT $)
+ The US State Department has fired more than 1,350 workers so far. (Reuters)

3 Nvidia’s CEO claims China’s military probably won’t use its AI chips
But then he would say that, wouldn’t he. (Bloomberg $)
+ Even after the Trump administration has eased chip software tool export restrictions. (FT $)
+ Rival Huawei is planning a major AI chip overhaul. (The Information $)

4 Scientists are reportedly hiding LLM instructions in their papers
Instructing models to give their work positive peer reviews. (The Guardian)

5 Amazon is dragging its heels launching its web version of Alexa
It appears the company underestimated just how much work it had to do. (WP $)

6 SpaceX’s revenue is on the up
As Tesla continues to struggle. (WSJ $)
+ Musk is not in favor of merging Tesla with xAI. (Reuters)
+ Trump is still planning to slash NASA’s budget. (The Atlantic $)
+ Rivals are rising to challenge the dominance of SpaceX. (MIT Technology Review)

7 The Air India crash was caused by a cut in the plane’s fuel supply
Cockpit voice recordings reveal that one pilot asked another why he’d cut off the supply. (CNN)

8 The UK’s attempt to ape DOGE isn’t going well
Councils are already blocking Reform UK’s attempts to access sensitive data. (FT $)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

9 Even crypto executives can fall for crypto scams
Just ask the top brass from MoonPay, which lost $250,000 worth of Ethereum. (The Verge)
+ The people using humour to troll their spam texts. (MIT Technology Review)

10 Why landline phones refuse to die 📞
The business world still loves them. (WSJ $)

Quote of the day

“We don’t like to work like that. I’m a Buddhist, so I believe in karma. I don’t want to steal anyone’s money.”

—A man forced to work in an online scam center in Myanmar recounts his experience to Nikkei.

One more thing

China wants to restore the sea with high-tech marine ranches

A short ferry ride from the port city of Yantai, on the northeast coast of China, sits Genghai No. 1, a 12,000-metric-ton ring of oil-rig-style steel platforms, advertised as a hotel and entertainment complex.

Genghai is in fact an unusual tourist destination, one that breeds 200,000 “high-quality marine fish” each year. The vast majority are released into the ocean as part of a process known as marine ranching.

The Chinese government sees this work as an urgent and necessary response to the bleak reality that fisheries are collapsing both in China and worldwide. But just how much of a difference can it make? Read the full story.

—Matthew Ponsford

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

  • You can easily make ice cream at home with just two ingredients.
  • Pink Floyd fans, this lecture is for you.
  • Lose yourself for a few minutes in the story behind an ancient Indian painting. (NYT $)
  • Remember the days of idly surfing the web? Here’s how you can still recreate them.

From MIT Technology Review via this RSS feed


California’s statewide power grid operator is poised to become the first in North America to deploy artificial intelligence to manage outages, MIT Technology Review has learned.

“We wanted to modernize our grid operations. This fits in perfectly with that,” says Gopakumar Gopinathan, a senior advisor on power system technologies at the California Independent System Operator—known as the CAISO and pronounced KAI-so. “AI is already transforming different industries. But we haven’t seen many examples of it being used in our industry.”

At the DTECH Midwest utility industry summit in Minneapolis on July 15, CAISO is set to announce a deal to run a pilot program using new AI software called Genie, from the energy-services giant OATI. The software uses generative AI to carry out real-time analyses for grid operators and comes with the potential to autonomously make decisions about key functions on the grid, a switch that might resemble going from uniformed traffic officers to sensor-equipped stoplights.

But while CAISO may deliver electrons to cutting-edge Silicon Valley companies and laboratories, the actual task of managing the state’s electrical system is surprisingly analog.

Today, CAISO engineers scan outage reports for keywords about maintenance that’s planned or in the works, read through the notes, and then load each item into the grid software system to run calculations on how a downed line or transformer might affect power supply.

“Even if it takes you less than a minute to scan one on average, when you amplify that over 200 or 300 outages, it adds up,” says Abhimanyu Thakur, OATI’s vice president of platforms, visualization, and analytics. “Then different departments are doing it for their own respective keywords. Now we consolidate all of that into a single dictionary of keywords and AI can do this scan and generate a report proactively.”
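Thakur’s description boils down to merging each department’s keyword list into one shared lookup and scanning every outage report against it. A minimal sketch of that idea in Python (the department names, keywords, and report text here are invented for illustration; this is not OATI’s actual data or implementation):

```python
# Hypothetical example: each department historically kept its own keyword list.
DEPARTMENT_KEYWORDS = {
    "transmission": ["downed line", "conductor"],
    "maintenance": ["planned outage", "transformer"],
}

# Consolidate every department's list into a single keyword-to-department lookup.
ALL_KEYWORDS = {
    kw: dept
    for dept, kws in DEPARTMENT_KEYWORDS.items()
    for kw in kws
}

def scan_reports(reports: list[str]) -> list[dict]:
    """Flag each report with the keywords (and owning departments) it mentions."""
    flagged = []
    for report in reports:
        text = report.lower()
        hits = {kw: dept for kw, dept in ALL_KEYWORDS.items() if kw in text}
        if hits:
            flagged.append({"report": report, "matches": hits})
    return flagged

reports = [
    "Planned outage on feeder 12 for transformer replacement.",
    "No issues reported overnight.",
]
print(scan_reports(reports))
```

One scan over a single consolidated dictionary replaces several departments each re-reading the same reports; the generative part Thakur describes would then summarize the flagged items into a report, which this sketch leaves out.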

If CAISO finds that Genie produces reliable, more efficient data analyses for managing outages, Gopinathan says, the operator may consider automating more functions on the grid. “After a few rounds of testing, I think we’ll have an idea about what is the right time to call it successful or not,” he says.

Regardless of the outcome, the experiment marks a significant shift. Most grid operators are using the same systems that utilities have used “for decades,” says Richard Doying, who spent more than 20 years as a top executive at the Midcontinent Independent System Operator, the grid operator for an area encompassing 15 states from the upper Midwest down to Louisiana.

“These organizations are carved up for people working on very specific, specialized tasks and using their own proprietary tools that they’ve developed over time,” says Doying, now a vice president at the consultancy Grid Strategies. “To the extent that some of these new AI tools are able to draw from data across different areas of an organization and conduct more sophisticated analysis, that’s only helpful for grid operators.”

Last year, a Department of Energy report found that AI had potential to speed up studies on grid capacity and transmission, improve weather forecasting to help predict how much energy wind and solar plants would produce at a given time, and optimize planning for electric-vehicle charging networks. Another report by the energy department’s Loan Programs Office concluded that adding more “advanced” technology such as sensors to various pieces of equipment will generate data that can enable AI to do much more over time.

In April, the PJM Interconnection—the nation’s largest grid system, spanning 13 states along the densely populated mid-Atlantic and Eastern Seaboard—took a big step toward embracing AI by inking a deal with Google to use its Tapestry software to improve regional planning and speed up grid connections for new power generators.

ERCOT, the Texas grid system, is considering adopting technology similar to what CAISO is now set to use, according to a source with knowledge of the plans who requested anonymity because they were not authorized to speak publicly. ERCOT did not respond to a request for comment.

Australia offers an example of what the future may look like. In New South Wales, where grid sensors and smart technology are more widely deployed, AI software rolled out in February is now predicting the production and flow of electricity from rooftop solar units across the state and automatically adjusting how much power from those panels can enter the grid.

Until now, much of the discussion around AI and energy has focused on the electricity demands of AI data centers (check out MIT Technology Review’s Power Hungry series for more on this).

“We’ve been talking a lot about what the grid can do for AI and not nearly as much about what AI can do for the grid,” says Charles Hua, a coauthor of one of last year’s Energy Department reports who now serves as executive director of PowerLines, a nonprofit that advocates for improving the affordability and reliability of US grids. “In general, there’s a huge opportunity for grid operators, regulators, and other stakeholders in the utility regulatory system to use AI effectively and harness it for a more resilient, modernized, and strengthened grid.”

For now, Gopinathan says, he’s remaining cautiously optimistic.

“I don’t want to overhype it,” he says.

Still, he adds, “it’s a first step for bigger automation.”

“Right now, this is more limited to our outage management system. Genie isn’t talking to our other parts yet,” he says. “But I see a world where AI agents are able to do a lot more.”


From MIT Technology Review via this RSS feed