Nine years after The New Yorker magazine labeled Stanford “Get Rich U,” Silicon Valley’s allure remains powerful for many students.
The number of undergraduate majors in Computer Science has nearly quadrupled since 2010, and hackathons are almost as easy to come by as fraternity parties. When Facebook, Microsoft or Google pays over $12,000 for a table at a Stanford career fair, the return on investment is assured: their famous brand names, not to mention the six-figure starting salaries and amenities-rich work environments, are certain to attract large crowds of talented job candidates.
But then there are students like Hannah Mieczkowski, Alyssa Romanos and Arnob Das, whose appreciation for the positives of technology is tempered by concerns over ethics and corporate cultures in the Valley.
Mieczkowski, a doctoral candidate in Communication, recently declined an offer to interview with Google for a research internship, citing the firing of artificial intelligence researcher Timnit Gebru “as indicative of a larger pattern of unjust behavior.” While Facebook has seen the most substantial decrease in the acceptance of job offers from graduates of top-ranked schools, according to CNBC, students have taken to social media to express a wider discontent with the industry — on issues ranging from the treatment of underrepresented communities to the development of potentially harmful technologies.
Gebru, a pioneer in the field of ethical AI, received her Ph.D. from Stanford, the same university where Google founders Sergey Brin and Larry Page were doctoral students. On Dec. 2, 2020, Gebru announced that Google had fired her after she criticized the company in an internal email and said she was blocked from publishing her latest studies on bias in AI systems. CEO Sundar Pichai, in an email to employees on Dec. 9, acknowledged that “we need to accept responsibility for the fact that a prominent Black, female leader with immense talent left Google unhappily,” and he promised to review the factors that led to her departure.
Although the mathematics governing a computer program are not inherently biased, the “training” data that allow a machine learning model to develop can be. For example, before an algorithm can be effective at “guessing” who is in a photo, it needs to collect and process information on what a human face looks like. If the algorithm is disproportionately exposed to people of a particular racial or ethnic group, it will optimize for those types of faces through pattern recognition, and it will perform less reliably on the faces it has rarely seen.
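The mechanism is simple enough to sketch in a few lines of code. The Python example below is a toy illustration, not any company’s actual system: it invents two synthetic groups whose data follow different underlying patterns, trains an off-the-shelf classifier on a set that is 95% one group, and then scores the model on each group separately.

```python
# Toy sketch: how a skewed training set yields skewed per-group accuracy.
# All names and data here are synthetic illustrations, not a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, true_weights):
    """Generate synthetic examples whose labels follow this group's own rule."""
    X = rng.normal(size=(n, 2))
    y = (X @ true_weights > 0).astype(int)
    return X, y

# Group A's labels depend on feature 0; group B's depend on feature 1.
Xa, ya = make_group(2000, np.array([1.0, 0.0]))
Xb, yb = make_group(2000, np.array([0.0, 1.0]))

# Train on a 95/5 mix: the model mostly sees group A.
X_train = np.vstack([Xa[:950], Xb[:50]])
y_train = np.concatenate([ya[:950], yb[:50]])
model = LogisticRegression().fit(X_train, y_train)

# Held-out accuracy: high for the overrepresented group, near chance for the other.
print("group A accuracy:", model.score(Xa[1000:], ya[1000:]))
print("group B accuracy:", model.score(Xb[1000:], yb[1000:]))
```

Real facial analysis systems are vastly more complex, but the failure mode this sketch produces, strong accuracy on the overrepresented group and near-chance accuracy on the other, is the same pattern researchers such as Gebru have documented in commercial systems.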
Mieczkowski has studied the adverse effects of algorithms. Prior to the recent hiring of data scientist and social scientist Rumman Chowdhury, Twitter faced harsh criticism for its image cropping algorithm. “There were many instances in which Twitter crops images, or did, because they wanted them all to be the same size or something of that nature,” Mieczkowski said. “And so, if you have like a long photo or a really wide photo, it would crop a specific way.”
Nine times out of 10, she said, the images were automatically cropped upon upload to the website, and often problematically so. “If there was a white person and multiple Black people, it would crop it so it would show you only the white person,” she added, also noting that there were repeated instances in which Twitter would crop the chest area of women’s bodies without user input. “Why does there need to be an algorithm at play here at all?” Mieczkowski asked.
Twitter no longer automatically crops photos. Mieczkowski pointed to the update as “a really good instance of making a product better, because it’s serving the people who would be the most screwed over by it.”
But the “most invigorating aspect of change in Silicon Valley, and across the country,” she said, is the labor organizing emerging inside technology companies. She cited the Alphabet Workers Union (AWU) — named for the parent company of Google — a rare creation in an industry historically resistant to the unionization of its white-collar workforce.
Mieczkowski has been learning about these matters, in part, from the online presence of Meredith Whittaker, a New York University researcher who works at the AI Now Institute. Whittaker often illustrates how companies are reliant on their workers, and how withholding one’s labor can be a powerful act.
Stanford junior Alyssa Romanos is preparing to intern in the technology industry for the first time. She will be writing software at Gusto, a San Francisco-based startup known for its modern approach to payroll, benefits and human resources for small businesses.
Through the interview process, Romanos found Gusto to be “mission-driven,” a quality that matters to her. She took CS 182W, “Ethics, Public Policy, and Technological Change,” a course for which Mieczkowski was a teaching assistant, and she has participated in Stanford-sponsored community service through the Cardinal Quarter program.
“It’s still pretty different from any other work I’ve done, but for me, it still feels like an important mission,” she said of Gusto. “They’re helping small businesses, especially at a time like this. And I did decline some Big Tech companies in favor of this one.”
An entry-level software engineer at a top Internet company can expect a salary of $140,000 per year upon graduation, often with perks such as free office meals prepared by Michelin-starred chefs, walking trails, massage specialists and juice bars. To Arnob Das, something about this seemed too good to be true when he recently visited Google’s Mountain View headquarters as a prospective intern.
“They have their showers, their beds and dinners set up and everything, and you never leave,” he said. “There’s no separation between productivity and your life. I think there’s more to life than working all the time.”
Das, a junior at Stanford, describes his origin story as “all over the place.” In fifth or sixth grade, he quickly developed an interest in the supplements sold at drugstores in his hometown of Portland. Modern medicine at the time, he found, was focusing on less than half of what needed to be studied.
“There’s 33 trillion bacteria and less than 30 trillion human cells,” Das said. “So I was really interested in how we could take a holistic approach to disease through interfacing between systems.”
He started with attempts to mimic the chemistry of the human microbiota in his family’s garage. One science project led to another, and by the time high school came around, Das found himself at the International Science and Engineering Fair in 2015. There, he formed the first connections of his professional network. “After cold calling or emailing a lot of professors, I eventually got some lab space,” he said. “I did some projects on free radical detection — these organic magnets, if you will — that could be used for health therapy, and I did quantum-computing-based projects, so eventually it really started escalating.”
He secured a spot in a program at Global Good, a social-impact startup accelerator backed by the Gates Foundation and Intellectual Ventures, which set him up to receive capital from the Beverly Hills-based firm Kairos Ventures. From there, he co-founded Hex Labs in June 2016, a startup developing new database technologies for medicine, quantum computing and energy, and he continued to run it as he entered college.
Hex Labs was successful enough to be named one of the top 50 emerging companies of 2019 by Inc. magazine. But even then, Das felt conflicted about what he was doing. As he continued to develop the startup, he noticed some fellow entrepreneurs “cutting a lot of corners” and making morally questionable choices. One project he worked on, he learned only after the fact, could be weaponized for railgun technology.
“So I was seeing how even philanthropic efforts are not clean,” he said. Next time around, autonomy in his work will be a bigger priority for Das.
He learned a lot in the process about the value systems that drive a company’s culture. In January of this year, he dissolved Hex Labs; he is now working toward an individually designed major in molecular engineering, as well as a minor in feminist, gender and sexuality studies.
Computer Science major declarations at Stanford started to rise significantly in 2007, according to an April report from the Tech History Project. Between 2010 and 2020, the number of conferred bachelor’s degrees in CS swelled from 86 (5% of the senior class) to 307 (17%). The number of conferred master’s degrees in CS doubled, and the number of doctorates increased slightly. Among course offerings, those related to data science and interdisciplinary technology saw the greatest growth in popularity.
In the 250-page Tech History Project report, a team of students led by seniors Nik Marda and Julia Ingram narrates the good, the bad and the ugly of Stanford’s role in technology and public policy over the past decade. The report includes several recommendations to the leaders of tomorrow, centered on maintaining Stanford’s status as an innovation hub, increasing diversity and inclusion, and creating more technology with ethics and the public interest in mind.
Marda came into college thinking that he was going to get a degree in CS, maybe explore some politics, and then pursue a career in Silicon Valley — “the classic Stanford path,” as he put it. Then came the Facebook–Cambridge Analytica data privacy scandal of 2018. Marda said that he remembers watching Mark Zuckerberg testify before Congress. He saw it as the intersection of his interests. “It was also very clear from that hearing that we have a lot of work to do, not just to properly regulate technology, but to have the right conversations around it in the halls of Congress and in civil society,” he said.
Because technology is interwoven with many facets of society, Marda said, an interdisciplinary and democratic approach is necessary for an ideal future. “We’re talking about people’s identities, their data, often very private information,” he said. “And historically, we have created regulation or created laws to govern how that type of information is protected. We quite literally have constitutional amendments that get right at this.”
Even though technology has proven to evolve faster than government, Marda is not discouraged. “We can learn from one example, and that is creating independent agencies, instead of having Congress try to constantly keep up with telecommunications policy,” he said. “A lot of that is devolved to the Federal Communications Commission. And so you can imagine independent agencies like the FCC serving as a model for data privacy regulation.”
Marda’s interests had been “vague” but eventually crystallized at the intersection of technology and government. He spent a summer building technology for the government. “But I realized that it wasn’t exactly the skills that I wanted to be using,” he said. “I was actually a little bit more interested in government for technology, rather than building technology for government.”
In the middle of his pivot, he ran into Constanza Hasselmann, the founder of the Public Interest Technology (PIT) Lab, a Stanford student organization devoted to the field. The next year, he joined her in co-leading the organization.
“The number of students I see that are interested in, for example, technology policies, at least empirically, is much larger now than I thought it was in 2017–18, when I felt like I was sort of grasping at whatever opportunities I could find,” Marda said. “And I think now there’s a lot more support, whether it’s from organizations like PIT Lab, or from the Stanford Cyber Policy Center, or all sorts of organizations for students to start engaging with work.”
Big Tech jobs remain popular, but conscientious objection is becoming more common among young people, in an echo of the past, according to Katie Creel, an Embedded EthiCS fellow at Stanford. “If you go back to the ’60s and ’70s, it was extremely common for applied mathematicians and early computer scientists to be very public and vocal about projects they wouldn’t work on, whether that was defense contracting or nuclear projects or other kinds of things that they thought were unethical,” Creel said.
“I think in the scope of ethics and technology it’s maybe the early blush of the Internet when there was a sense that these new companies were not doing anything wrong, or couldn’t do anything wrong — that’s more unusual,” she said.
The newly developed Embedded EthiCS program was introduced in three courses this past school year, featuring guest lectures by Creel. Now that the industry has returned to “a more reasonable state of affairs,” she added, a growing number of professionals in Silicon Valley are thinking about their values and the big-picture impact they want to have on the world.