What Artificial Intelligence Can’t See
After important roles at some of big tech’s biggest firms, Hong Qu ’99 is training a critical lens on the ethical implications of our technological future.
By Lauren Rubenstein
When Hong Qu graduated from Wesleyan in 1999, the technological landscape was very different from what it is today. AOL and Microsoft were battling for online supremacy. Most folks were still accessing the web via dial-up connections. Handheld devices hadn’t yet taken hold. Social media was still many years away—five for Facebook, seven for Twitter, 11 for Instagram.
In the years since, Qu has not only seen the rapid proliferation of internet usage and social networking; he has played critical roles at some of the companies that shaped the very way we experience the virtual world today. A member of YouTube’s founding team, Qu has also worked at tech companies such as Google and news organizations including Upworthy and Fusion. Currently, Qu is a first-year PhD student at Northeastern University, an adjunct lecturer at Harvard Kennedy School teaching data visualization, and a practitioner fellow in race and technology at Stanford University. He describes his current work as “learning and teaching about data, networks, and equity in tech.”
A double major in East Asian studies and economics at Wesleyan, Qu went on to serve on the University’s Board of Trustees from 2016 to 2019. Below, he discusses how the internet and technology have evolved in the last 20 years, and some important ethical considerations for their future and our own.
You’ve held important positions at a number of well-known organizations in the technology and media fields. What’s been the common thread in your career progression?
Hong Qu: My career has been a zigzag. After graduating, I made my way out to the West Coast to get my master’s degree at the University of California, Berkeley, and spent 10 years in the start-up world. This had been my dream ever since I was a kid using computers in the late 1980s and saw credits for the software engineers on the start-up screen. I wanted to see my name there too. After leaving Google in 2009, I spent 10 years in journalism, mentored by Alberto Ibargüen ’66, P’97, helping modernize news organizations that had to bring news directly to consumers instead of hoping they’d go look for news on destination sites or appointment TV. Now, I suppose, I’m into my next 10-year trajectory in academia: I’m starting my PhD at the Northeastern University Network Science Institute where my advisor is a Wesleyan alumnus, David Lazer ’88, a distinguished political scientist. I study alongside another Wesleyan alumna, Adina Gitomer ’20, who is also a first-year PhD student in the Lazer Lab.
The common thread is that I look ahead to the big problems the world faces and try to position myself in a way that I can have the biggest impact. I’ve been able to do this in my work at some large organizations that have powerful platforms and are serious about creating solutions. As an academic, I plan to be very engaged in public discourse and policy to have a positive impact on society.
When did you first become exposed to issues of inequity in tech?
H.Q.: My family immigrated to New York City from China in the mid-’80s when I was in third grade. Back then, no one had computers, but I was fortunate to have access through my sister, who got into MIT and left her PC at home. My teachers in middle school were so impressed by my book reports decorated with clip art. When I attended Wesleyan from 1995 to 1999, a time when most people were still using AOL dial-up, we were lucky to have broadband internet on campus. Because of this, I was able to teach myself HTML and got summer internships and jobs working for dotcoms.
When I graduated from Wesleyan, the first thing I did was volunteer in disadvantaged communities in New York City, training local residents in basic office software and website design. Back then, we used the term the “digital divide.” This is still very much a problem today, whether it’s students who can’t log in to access their virtual classroom or people trying to register for COVID vaccine appointments without connectivity. Policymakers are working to close these gaps, from universal broadband infrastructure to algorithmic bias, and I hope to contribute along the way.
How does bias enter into AI?
H.Q.: I believe inequities in tech become more direct and pernicious as you move up the layers of the network—from the physical layer to the software layer to the application layer to the content layer. In my master’s program at Berkeley, for example, I learned how any computer code has a lot of assumptions built in. Any time you write code to do something, it represents your worldview and values, as well as norms and historical baggage in society. When I worked at YouTube, I would write code to recommend which videos to watch next or what channels to subscribe to. These types of algorithms are based on probabilistic models that predict what a user might be interested in. The algorithms might be super complex, but at the end of the day, the inequity inherent in codifying these rules in software has real, high-stakes impacts on people’s lives and livelihoods, such as peddling confirmation bias and extremism.
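To make that concrete, here is a purely illustrative sketch (not YouTube’s actual system; every name and number in it is hypothetical) of how a recommender that ranks videos by predicted engagement quietly encodes a value judgment: whatever keeps people watching rises to the top, regardless of whether it informs or inflames.

```python
# Illustrative toy recommender: ranks candidate videos by predicted watch time.
# The single design choice of what to optimize (engagement) is where a
# worldview, and a potential bias, gets baked into the code.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # stand-in for a probabilistic model's output
    is_sensational: bool            # hypothetical content label

def rank_for_user(candidates: list[Video]) -> list[Video]:
    # Ranking purely by predicted engagement: nothing about accuracy,
    # diversity, or downstream harm enters the objective.
    return sorted(candidates, key=lambda v: v.predicted_watch_minutes, reverse=True)

candidates = [
    Video("Calm explainer", predicted_watch_minutes=3.2, is_sensational=False),
    Video("Outrage clip", predicted_watch_minutes=7.8, is_sensational=True),
    Video("Local news recap", predicted_watch_minutes=2.1, is_sensational=False),
]

for video in rank_for_user(candidates):
    print(video.title, video.predicted_watch_minutes)
# The sensational clip ranks first because the objective rewards engagement alone.
```

The one line that chooses the ranking objective is where the worldview lives; optimizing for something else, such as accuracy or diversity, would produce a very different feed.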
One of my biggest criticisms of tech companies is they create negative externalities that become a burden—on parents, on police officers, on psychological counselors, and on journalists. These social service providers need to respond to and debunk disinformation and mass delusion in the current political climate. My research into data, networks, and tech equity aims to understand, anticipate, and find interventions to keep ahead of these negative impacts on society. So much to fix!
Your current project at Stanford considers the impact of AI on fair lending laws and practices. What drew you to this topic?
H.Q.: When I was in high school, my parents—who are immigrants and don’t speak English very well—had their identities stolen and had thousands of dollars in debt charged in their names. I spent many, many hours on the phone trying to re-establish their credit. That was a nightmare, and I know it happens to a lot of people. After Wesleyan, I tried to start a small business myself and found it nearly impossible to get a small business loan because they look at your personal credit and income level.
In thinking about issues like social mobility and economic justice, having access to capital is one of the biggest drivers of wealth creation and equitable economic outcomes. While there are laws on the books from the 1960s and ’70s that protect consumer rights, the credit scoring system itself has a lot of biases and can be manipulated by lenders for their own benefit. Just look at the fallout from the subprime mortgage crisis. Many companies don’t want to reveal their criteria because that might enable people to reverse engineer them. Also, how the data is collected is very opaque, and no one knows if it violates consumers’ privacy. My research strives to make the financial system more inclusive, as measured by social mobility. In many aspects of our lives, algorithms are becoming the new gatekeepers, and we must ensure that they are fair and equitable and hold them accountable.
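As a hypothetical illustration of the opacity Qu describes (a made-up scoring rule, not any real lender’s model), a scorer can avoid protected attributes entirely and still reproduce historical disparities through a correlated proxy such as a ZIP code:

```python
# Hypothetical credit-scoring sketch: the rule never sees race or gender,
# but a proxy feature correlated with historical segregation, the applicant's
# ZIP code, still drives different outcomes for identical finances.

HIGH_RISK_ZIPS = {"10451", "60621"}  # hypothetical list inferred from past default data

def credit_score(income: float, debt: float, zip_code: str) -> int:
    score = 600
    score += min(int(income / 1_000), 150)    # reward income, capped
    score -= min(int(debt / 1_000) * 2, 200)  # penalize outstanding debt
    if zip_code in HIGH_RISK_ZIPS:
        score -= 80                           # the opaque proxy penalty
    return max(300, min(score, 850))

# Two applicants with identical income and debt get different scores
# solely because of where they live.
print(credit_score(income=55_000, debt=10_000, zip_code="10451"))  # 555
print(credit_score(income=55_000, debt=10_000, zip_code="94105"))  # 635
```

Because the ZIP-code penalty is hidden inside the model, neither applicant can see why the scores differ, which is the kind of accountability gap Qu describes when he calls algorithms the new gatekeepers.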
As AI increasingly touches every aspect of our lives, do you believe it will ultimately be a force for good or evil?
H.Q.: I believe that with a strong and thoughtful governance framework, AI can be used for good. If you think about some of the government policies regarding procurement or mortgages, for example, they do give preference to, say, people with veteran status or minority-owned businesses. Just as AI might perpetuate the status quo and exacerbate inequalities, it could be tuned for greater equity in the long run if purpose-driven governance is in place. Some legal scholars are advocating for affirmative action algorithms. It’s really up to society and policymakers to steer our path toward an equity-centered, participatory AI.
How did your Wesleyan education influence your thinking on these matters?
H.Q.: Beginning with my new student orientation, Wesleyan taught me to be conscious of existing social hierarchies and power imbalances. My junior year, I took a class called “The Harlem Renaissance” with Professor Gayle Pemberton in African American studies, which was a big awakening for me on race issues. That class gave me a sense that, as a minority living in the United States, I belong and can play a big role in creating this American culture, not just assimilating to become more like whatever the dominant culture is. It helped me celebrate the uniqueness of my own identity.
In that class we read W. E. B. Du Bois’s writing on “double consciousness.” He described how minorities must always think about the prejudices held by the people with whom they interact, and wonder whether others perceive them with respect or stigma. As a Chinese-American immigrant child, I learned fortitude and forbearance from my family, brushing off endless taunts and prejudice by staying rooted in my identity and ideals; nonetheless, the recent surge in anti-Asian violence compels everyone to speak up and speak out against bigotry of any kind.
For me, how a multiracial, pluralistic American identity can come together is the journey, and that feels more invigorating than the destination. That’s the struggle you see today in political debates about race and inequality in America—everywhere you turn, hyperactive discourse in entangled networks is causing social upheaval. The whole world is watching closely to see how all Americans reckon with our past and come together to live up to the highest forms of our democratic principles.
Top: Hong Qu. Photo © Belfer Center for Science and International Affairs
* * *
Sidebar: What kind of AI are we raising?
Through the Berkman Klein Center and MIT Media Lab’s 2019 Assembly program, Qu partnered with three other experts to create the AI Blindspot project, which aims to promote responsible design and use of AI systems. The team first produced a nine-card deck to help technical teams identify biases and structural inequalities in artificial intelligence systems.
They then expanded the project to reach a broader civil society audience, which posed the challenge of adapting a framework created for a technical audience into something accessible and exciting for a general one. In partnership with The Consentful Tech Project and And Also Too, they developed an art and storytelling approach to AI Blindspots called “What kind of AI are we raising?” It imagines AI as a “child-like being” that reflects its environment and the people around it, absorbing the surrounding values, priorities, and biases. In this framing, AI must be raised by a diverse group of stakeholders with attention to factors such as representation, privacy, explainability, oversight, and accountability. To that end, the AI Blindspot team developed a set of tools for advancing equity in AI systems through civic campaigns, popular education, and local, national, and international advocacy and organizing.