
Howard University Students Win Inaugural Microsoft AI Policython During Congressional Black Caucus Week



During the 2025 Congressional Black Caucus Week in Washington, DC, five Howard University students earned first place in the inaugural Microsoft AI Policython.

According to a university news release, the event, hosted by the Black at Microsoft DMV Chapter, brought together students from Howard University, the University of the District of Columbia, and Coppin State University to develop and present policy solutions for real-world issues involving artificial intelligence.

The Howard team, known as Truth and Service Solutions Inc., included junior psychology major Janeen Louis, junior political science and economics major Fatumata Dia, senior computer science major Kyla Hockett, junior computer science major Soluchi Fidel-Ideabuchi, and senior mathematics major Sydney Helstone, per the release. The team was supported by Dr. Talitha Washington, executive director of Howard’s Center for Applied Data Science and Analytics.

Over the course of the competition, students worked with Microsoft mentors to identify an AI-related issue, conduct research, draft a policy response, and present it to a panel of experts.

“I hope students leave the experience with a deeper appreciation for how policy and innovation intersect in shaping the future of artificial intelligence,” Washington said in the release. “Beyond the competition, I want them to have agency and see themselves as leaders in responsible AI technology and policy innovation. My goal is for them to build both confidence and capacity to create AI technology that makes our world a better place.”

The team’s case involved a bank and a university that developed an AI budgeting app for students. The app gave faulty advice that caused users to overdraw their accounts. The students were asked to determine whether the tool should be paused, revised, or replaced, and who should be responsible for the financial losses.

According to the release, the team concluded that the solution depended on whether the app’s terms included a clear disclaimer about financial risks. Their proposal addressed ethical, safety, and financial concerns, recommending that the app be labeled for advisory use only, that overdraft protections be added for student users, and that a neutral auditor be involved. They also suggested short, interactive training videos to help users understand the app’s features and risks.

Dia said the videos were designed to reach users who often skip lengthy terms and conditions.

“Most people — all people I think — don’t read it; you just press agree and you don’t know what the phone is going to do with your information or your data,” she said. “We incorporated little fun trainings that included tool tips, small videos to ensure that the user or the students know exactly what the app is doing throughout the entirety of its use.”

While the judges praised their ideas, they noted that some recommendations could be costly to implement and encouraged the students to keep those costs in mind.

Each member of the Howard team drew from personal experience with AI and agreed regulation was needed, according to the release. Hockett shared that during her internship at Deloitte, the use of AI tools increased significantly.

“Last year I didn’t really use AI that much, and this year it was heavily pushed that I do,” she said. “All my coworkers and some interns and some people above me, they were saying, ‘oh, let’s use AI to make the PowerPoints. Let’s use AI to get this document or write this document.’ I think my entire project plan for the summer was written up via AI. I had mixed feelings about that. I think we can do a lot of this just by ourselves.”

Louis added that a freshman-year class on AI ethics and bias sparked her interest in AI policy. She noted that AI is already influencing industries such as mental health, where AI-driven tools are emerging to assist therapists and clients. However, Louis emphasized the importance of humans helping humans when it comes to therapy.

Events like the Microsoft AI Policython highlight the need for collaboration to ensure AI is used responsibly. Microsoft’s Responsible AI Standard, according to the company, outlines principles of fairness, reliability, transparency, and accountability, which the company integrates into its products and partnerships.

For the Howard team, the Policython provided hands-on experience in developing solutions that balance innovation with responsibility.


