On July 23, 2025, President Donald Trump unveiled a national Artificial Intelligence Action Plan, outlining a new direction for how the federal government will regulate, support, and deploy artificial intelligence technologies. The policy marks a significant shift in the nation’s AI strategy, placing strong emphasis on accelerating innovation and reducing regulatory oversight, particularly in areas like national defense, law enforcement, and private-sector development.
While the administration argues that the plan is necessary to ensure the United States maintains a competitive edge globally, especially against countries like China, many civil rights and technology advocacy organizations have expressed concern over its implications.
What’s in the AI Action Plan?
The plan follows the administration’s revocation, earlier in 2025, of the 2023 executive order on AI issued under the Biden administration, which had prioritized ethical safeguards, transparency, and public engagement. In its place, the Trump administration’s framework:
- Reduces federal regulations that could slow AI development;
- Encourages the rapid expansion of AI in national security and law enforcement;
- Increases collaboration with private industry;
- Offers little detail on how AI-related risks will be monitored or addressed.
The Response: Civil Society Pushes Back
Shortly after the announcement, a coalition of over 80 organizations—including civil rights groups, academic researchers, and digital policy advocates—released The People’s AI Action Plan. This alternative framework calls for:
- Greater transparency in how AI systems are developed and used;
- Mandatory oversight to prevent bias and discrimination;
- Inclusive policymaking that involves educators, youth, and impacted communities;
- Ethical safeguards to ensure AI technologies protect civil liberties and privacy.
Their concern is that the federal government’s new direction lacks meaningful public accountability and could allow powerful institutions to deploy AI in ways that harm vulnerable populations.
Why It Matters to Young People
Artificial intelligence already plays a major role in shaping the digital spaces where teens spend their time, from personalized content feeds on social media to algorithmic decisions used in college admissions, hiring, and education. These tools often operate invisibly and without clear explanation; without adequate oversight, they risk amplifying existing inequalities.
Young people are also the future workforce, voters, and civic leaders who will inherit the consequences of today’s AI policies. Ensuring that AI systems are fair, ethical, and accountable is not just a technical issue—it’s a democratic one.
How Students Can Stay Engaged
- Understand the issues: Learn how AI systems work and how they influence media, education, and public life.
- Ask questions: Inquire how your school, community, or local government is using AI tools.
- Stay informed: Follow developments in AI policy and read from a range of credible sources.
- Support responsible innovation: Advocate for policies that promote transparency, public input, and ethical standards.
Final Thought
As artificial intelligence becomes more integrated into the systems that shape our daily lives, it is essential that its development be guided by principles of fairness, equity, and accountability. At Media Savvy Teens, we believe young people should have a seat at the table in shaping the future of technology—not only as users but as informed participants in public discourse and policy.

