AI
By 2026, generative AI will shift from optional experimentation to a core expectation in business education. The schools that stand out will be the ones pairing broad access with strong safeguards and clear standards for responsible use. With BlueChipAI, we can discuss how UNC Kenan-Flagler Business School is integrating AI into teaching, improving efficiency for staff and faculty, and setting clear norms for ethical, secure use of AI in coursework, research and day-to-day academic work. These efforts are transforming how the school prepares students for the workplace of the future and are modeling how academia and industry can work together to shape informed, forward-thinking decisions about generative AI use.
— Paul Wolff, director of the Faculty Consulting Group at UNC Kenan-Flagler Business School

AI, productivity and the future of work
As someone who studies open innovation, crowds and the future of work, I’m unpacking where AI is genuinely improving productivity, where organizations are still struggling to translate experimentation into impact and what a more execution-focused next phase of AI could mean for firms, workers and markets.
— Arvind Malhotra, professor at UNC Kenan-Flagler Business School

AI
It’s somewhat foolish to make predictions about AI, a technology that is as opaque as it is transformative. But if I had to bet, I’d say that 2026 will be the year the bubble bursts. I hope I’m wrong, but I fear I’m not.
— Scott Geier, teaching assistant professor at UNC Hussman School of Journalism and Media

AI law and regulation
The clock on AI regulation is slowing significantly, and in some cases unconstitutionally. Efforts in the European Union are facing friction in the rollout of the AI Act, and there is also significant resistance from the White House, where President Donald Trump has announced by executive order his intent to stop U.S. states from enacting and enforcing their own regulation. It’s also far from clear that regulating AI harms will address the growing power and influence of AI companies over the state and citizens. Whether we’re looking at automating more labor, collecting and analyzing more biometric data or training AI systems on others’ intellectual property, regulation is just one answer to the growing imbalances in power that AI generates and the growing chasm between the U.S. and its allies in terms of policy and regulation. The year 2026 will likely be a turning point in the discussion about the role law can and will play; there is no question that many of us are deeply concerned about whether regulation alone can preserve democratic norms.
— Tori Ekstrand, professor at UNC Hussman School of Journalism and Media

How online search engine AI overviews influence searches about politics
As people’s search patterns change, a key fact remains the same — our starting points, like our keywords or our prompts, drive the information that is returned to us. Since our language is heavily influenced by our ideological dialects, how we see the world shapes what is ultimately returned to us, reinforcing, rather than broadening, what we already think we know politically.
— Francesca Tripodi, associate professor at the UNC School of Information and Library Science

AI as an infrastructure and community catalyst
In 2026, AI will be less of a “tool” and more of the underlying infrastructure for testing, refining and scaling ideas. Universities that lead will use AI to accelerate research translation, support founders and teach students to use these systems responsibly, with clear guardrails around ethics, bias and data protection. At UNC-Chapel Hill, Innovate Carolina is coordinating a nearly 1,000-member AI Community where students, researchers, staff, entrepreneurs and employers learn and build together, creating a collaborative model that is becoming a blueprint for how regions grow their AI talent.
— Patrick Kastian, assistant director of the Data Intelligence Hub at Innovate Carolina
