A new educational program developed at MIT aims to teach U.S. Air and Space Forces personnel to understand and utilize artificial intelligence technologies. In a recent peer-reviewed study, the program's researchers found that this approach was effective and well-received by personnel with diverse backgrounds and professional roles.
The project, which was funded by the Department of the Air Force–MIT Artificial Intelligence Accelerator, seeks to contribute to AI educational research, specifically regarding ways to maximize learning outcomes at scale for people from a variety of educational backgrounds.
Experts at MIT Open Learning built a curriculum for three general types of military personnel — leaders, developers, and users — using existing MIT educational materials and resources. They also created new, more experimental courses targeted at Air and Space Forces leaders.
MIT scientists then led a research study to analyze the content, evaluate the experiences and outcomes of individual learners during the 18-month pilot, and propose innovations and insights that could enable the program to eventually scale up.
They used interviews and several questionnaires, offered to both program learners and staff, to evaluate how 230 Air and Space Forces personnel interacted with the course material. They also collaborated with MIT faculty to conduct a content gap analysis and identify how the curriculum could be further improved to address the desired skills, knowledge, and mindsets.
Ultimately, the researchers found that the military personnel responded positively to hands-on learning; appreciated asynchronous, time-efficient learning experiences that fit into their busy schedules; and strongly valued a team-based, learning-through-making experience, but sought content that included more professional and soft skills. Learners also wanted to see how AI directly applied to their day-to-day work and the broader mission of the Air and Space Forces. They were also interested in more opportunities to engage with others, including their peers, instructors, and AI experts.
Based on these findings, which the program researchers recently shared at the IEEE Frontiers in Education Conference, the team is augmenting the educational content and adding new technical features to the portal for the next iteration of the study, which is currently underway and will extend through 2023.
“We’re digging deeper into expanding what we think the opportunities for learning are, which can be driven by our research questions but also by understanding the science of learning at this kind of scale and complexity of a project. But ultimately, we’re also trying to deliver some real translational value to the Air Force and the Department of Defense. This work is leading to a real-world impact for them, and that is really exciting,” says principal investigator Cynthia Breazeal, who is MIT’s dean for digital learning, director of MIT RAISE (Responsible AI for Social Empowerment and Education), and head of the Media Lab’s Personal Robots research group.
Building learning journeys
At the outset of the project, the Air Force gave the program team a set of profiles that captured the educational backgrounds and job functions of six basic categories of Air Force personnel. The team then created three archetypes it used to build “learning journeys” — a series of training programs designed to impart a set of AI skills for each profile.
The Lead-Drive archetype is a person who makes strategic decisions; the Create-Embed archetype is a technical employee who implements AI solutions; and the Facilitate-Employ archetype is an end-user of AI-augmented tools.
Convincing the Lead-Drive archetype of the importance of this program was a priority, says lead author Andrés Felipe Salazar-Gomez, a research scientist at MIT Open Learning.
“Even inside the Department of Defense, leaders were questioning whether training in AI is worth it or not,” he explains. “We first needed to change the mindset of the leaders so they would allow the other learners, developers, and users to go through this training. At the end of the pilot, we found they embraced this training. They had a different mindset.”
The three learning journeys, which ranged from six to 12 months, included a mix of existing AI courses and materials from MIT Horizon, MIT Lincoln Laboratory, MIT Sloan School of Management, the Computer Science and Artificial Intelligence Laboratory (CSAIL), the Media Lab, and MITx MicroMasters programs. Most educational modules were offered entirely online, either synchronously or asynchronously.
Each learning journey included different content and formats based on the needs of users. For instance, the Create-Embed journey included a five-day, in-person, hands-on course taught by a Lincoln Laboratory research scientist that offered a deep dive into technical AI material, while the Facilitate-Employ journey comprised self-paced, asynchronous learning experiences, primarily drawing on MIT Horizon materials, which are designed for a more general audience.
The researchers also created two new courses for the Lead-Drive cohort. One, a synchronous online course called The Future of Leadership: Human and AI Collaboration in the Workforce, developed in collaboration with Esme Learning, was based on the leaders’ desire for more training around ethics and human-centered AI design, and more content on human-AI collaboration in the workforce. The researchers also crafted an experimental, three-day, in-person course called Learning Machines: Computation, Ethics, and Policy, which immersed leaders in a constructionist-style learning experience: teams worked together on a series of hands-on activities with autonomous robots, culminating in an escape-room-style capstone competition that brought everything together.
The Learning Machines course was wildly successful, Breazeal says.
“At MIT, we learn by making and through teamwork. We thought, what if we let executives learn about AI this way?” she explains. “We found that the engagement is much deeper, and they gained stronger intuitions about what makes these technologies work and what it takes to implement them responsibly and robustly. I think that is going to deeply inform how we think about executive education for these kinds of disruptive technologies in the future.”
Gathering feedback, enhancing content
Throughout the study, the MIT researchers checked in with the learners using questionnaires to obtain their feedback on the content, pedagogies, and technologies used. They also had MIT faculty analyze each learning journey to identify educational gaps.
Overall, the researchers found that the learners wanted more opportunities to engage, either with their peers through team-based activities or with faculty and experts through synchronous components of online courses. And while most personnel found the content interesting, they wanted to see more examples that were directly applicable to their day-to-day work.
Now in the second iteration of the study, the researchers are using that feedback to enhance the learning journeys. They are designing knowledge checks as a component of the self-paced, asynchronous courses to help learners engage with the content. They are also adding new tools to support live Q&A events with AI experts and to help build more community among learners.
The team is also looking to add specific Department of Defense examples throughout the educational modules and to include a scenario-based workshop.
“How do you upskill a workforce of 680,000 across diverse work roles, all echelons, and at scale? That is an MIT-sized problem, and we are tapping into the world-class work that MIT Open Learning has been doing since 2013 — democratizing education on a global scale,” says Maj. John Radovan, deputy director of the DAF-MIT AI Accelerator. “By leveraging our research partnership with MIT, we are able to study the optimal pedagogy for our workforce through focused pilots. We are then able to quickly double down on unexpected positive results and pivot on lessons learned. That is how you accelerate positive change for our airmen and guardians.”
As the study progresses, the program team is sharpening its focus on how it can enable this training program to reach a larger scale.
“The U.S. Department of Defense is the largest employer in the world. When it comes to AI, it is really important that its employees are all speaking the same language,” says Kathleen Kennedy, senior director of MIT Horizon and executive director of the MIT Center for Collective Intelligence. “But the challenge now is scaling this so that learners, who are individual people, get what they need and stay engaged. And this will actually help inform how different MIT platforms can be used with other kinds of large groups.”