“Cataloguing Change: Women, Art and Technology” discusses the critical contributions women made in creating, curating, and teaching new media and computer art. The article begins with early conceptions of digital art through artists such as Vera Molnár, who used the computer directly to “generate visual possibilities.” It particularly highlights Bell Laboratories as an influential actor, supporting artists such as Lillian Schwartz in experimenting with computers and helping her create one of the first computer-generated motion-graphic films. In tracing women’s history in computer art, the article also examines the complex relationship between gender and technology. Because computer science developed in male-dominated environments, female artists had to navigate the field’s uneasy gender dynamics. Technology is not an objective product but a cultural artifact that is given meaning, and as a result it embodies patriarchal values. Both past and present, women have been vital contributors to computer art, a fact that contemporary art networks continue to address.
Qs:
1. Is technology bias-free? How does bias play a role in shaping the technologies we use daily?
2. How does AI-generated art impact creative freedom and originality?
3. Will certain mediums remain impossible for technology to impact?
“Black Gooey Universe” discusses the development of the graphical user interface (GUI) and its inseparable relationship with race. Even before the technology’s inception, Silicon Valley startup culture began as a space of whiteness, one where straight cis white males developed ideas without consequence in their “ivory tower.” This embedded whiteness is reflected in the technology itself, highlighted in Xerox PARC’s first rendition of a natively black-screened GUI, which was later turned white when Apple appropriated the GUI for the Apple Lisa. As technology becomes more ubiquitous, it becomes harder to view it critically and to recognize the hegemonic systems within it.
In response, American Artist proposes Blackness as an antithesis to the assumed whiteness of screens. On early monitors, blackness was the default: it was from this blackness that colors were represented, which is why Apple had to render an inefficient white backdrop for the Lisa. The Black gooey (GUI) can seem slow or broken when the values of white screens prevail, but its nuance gives it the power to “destroy a contemporary hegemonic interface.”
Qs:
1. How is hegemony maintained through technology? Why does technology seem “objective”?
2. Beyond the technological systems themselves, how does racial injustice affect how technology is distributed?
3. Other than the GUI, in what other technological spaces do we see whiteness embedded?
When I think about the Internet, computers, and how I used them in my childhood, there’s a hazy grey area I cannot remember past. Some of my earliest memories of computers are playing the CD-ROM games my parents would buy and learning how to type and make PowerPoint presentations at school. The leap from using computers for an hour or two a day to the countless hours I now spend on the Internet is hard to trace, which calls into question how quickly Internet culture became ubiquitous and naturalized.
A section of Odell’s article that resonated with me was her discussion of ‘how-to’ videos and their transition into ‘sponsored how-to’ videos. In my early teens, I learned a lot from homemade, low-quality ‘how-to’ videos, whether game tutorials, magic tricks, or even schoolwork. Nowadays, such videos have evolved into what Odell describes as “hyper-produced” videos, where “everyday” content creators center their videos on sponsored products, affiliate links, and Amazon storefronts.
It is hard to imagine the Internet as an innocent communicative space, especially when the anonymity it provides can heighten hate speech, such as the misogynist trolls and racist subreddits Odell mentions. However, when the commodified, productivity-focused state of the Internet leaves fewer chances to establish connections with people around the world, thinking back on how the Internet used to be can be a good reminder of its potential and can help us think critically about its development.
Qs:
1. When was the last time you had an unmediated conversation with someone on the Internet? Of course, all communication on the Internet is mediated, but when was the last time you had a truly random encounter with someone?
2. Do you think it’s possible for the Internet to change into a space with more “randomness” and chance conversations with strangers? Why or why not?
3. In Odell’s example with Roosh, she explains that it’s hard to find a “sense of mutual understanding” or “coexistence on the same plane of reality” when conversing with strangers on the Internet. What structures may be reinforcing this pattern and behavior?
Lauren McCarthy’s talk at Eyeo 2019 helped me bridge a gap in my understanding of surveillance. I had always understood surveillance as a form of explicit and implicit control through equipment like security, body, and home cameras, but I had never thought of social media as a form of surveillance. Just as in McCarthy’s ‘Follower,’ we are essentially signing up to be followed and surveilled when we join and post on social media. From this perspective, as McCarthy states, surveillance becomes a luxury experience. It stems from a desire to be seen, and those in privileged positions can fulfill that desire by claiming they have nothing to hide. For those in oppressed communities, however, surveillance takes on a different meaning: it becomes a tool that works against them, used for profiling, policing, and control. Another idea that resonated with me is McCarthy’s explanation of the complex but close connection between human labor and how an AI system like Amazon Alexa works. Because we seldom have human-to-human interaction on the Internet or through technological systems – as explored in Odell’s “How to Internet” article – we seem to forget that humans are the ones creating and managing these systems. Technology like Amazon Alexa does not come into existence out of nowhere; it is created within a small, homogeneous group of developers where multiple instances of bias and error can be found. With the rapid development of AI systems like ChatGPT and DALL-E 2, this disconnect may be exacerbated.
Qs:
1. What are we losing/sacrificing for the sake of convenience when we use AI systems like Alexa?
2. Where does the ‘desire to be seen,’ as McCarthy talks about, stem from?
3. How has the disconnect between technology and the human labor behind it affected issues regarding accountability? Will this further shift with the continued development of AI systems?
Low-resolution images, or “poor images,” are often regarded as useless, inferior versions of high-quality images. When you can find multiple versions of the same image in a matter of seconds, they may seem unnecessary; why search for and use low-resolution images when high-quality alternatives are available? However, poor images are not just degraded versions of their higher-quality counterparts; they have their own cultural and political significance. The commercialization of cinema and the expansion of audiovisual monopolies have allowed the hegemonic ideology of global capitalism to maintain its power; the commercial circulation of high-quality images leaves no room for imperfections. Poor images, in this sense, are a counter-hegemonic, nonconformist force that resists this ideology. They are often produced by and for marginalized communities with outdated cameras, computers, or other unconventional means of distribution that we may now find inefficient. This communal participation in the production of images allows marginalized communities to create networks and share information in the face of censorship and repression in the form of privatization. Poor images embrace their imperfections and use them as an opportunity to view images outside a consumerist perspective. They escape the notion of culture as a commodity and instead treat it as a collaborative practice in which everyone is both viewer and producer.
Qs:
1. How can artists reclaim creativity in an age where art is commodified?
2. Is there such a thing as an “original image”? How might the notion of an “original image” differ in the context of digital culture?
3. How are “poor images” being reproduced now? What are some attempts at repressing these “poor images”?
Because technology is often viewed as a static object unaffected by social and political structures, we tend to detach ourselves from the human labor that went into creating it. We tend to imagine modern technology as something a lone genius simply thinks up one day. However, many factors, such as marketing, media representations, and the subjectivity of the humans who build it, influence how we perceive and use technology. What stood out to me most from Buolamwini’s talk was her mention of the real-life implications of algorithmic bias. The vast datasets on which facial recognition is built encode systems of inequality; being mislabeled as someone else on a Facebook post, as Buolamwini mentions, or in your iPhone Photos app seems harmless, but the same technology is used to perpetuate real-life harm. For example, studies have repeatedly shown that marginalized communities – particularly poor, Black, and brown communities – are more heavily policed than their counterparts. Thus, when facial recognition technology is used for predictive policing, it disproportionately affects these communities with increased surveillance, policing, and false accusations. As facial recognition systems continue to train on surveillance data from these communities, the coded bias within them grows, ultimately creating a positive feedback loop that exacerbates these social consequences over time.
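To make that feedback loop concrete, here is a minimal toy simulation of my own (the district names, counts, and greedy dispatch rule are illustrative assumptions, not anything from Buolamwini’s talk). Both districts have an identical underlying incident rate; one simply starts with a few more recorded arrests, and because arrests can only be recorded where patrols are sent, the record never self-corrects:

```python
# Hypothetical toy model of a runaway feedback loop in predictive policing.
# Both districts have the SAME true incident rate; District A merely starts
# with a few more recorded arrests in the historical data.

arrests = {"A": 55, "B": 45}   # historical recorded arrests (biased start)
INCIDENTS_PER_ROUND = 100       # true incidents per district, identical

for _ in range(10):
    # "Predictive" step: dispatch patrols to the highest-scoring district.
    target = max(arrests, key=arrests.get)
    # Observation step: only patrolled incidents become recorded arrests,
    # which then feed back into the next round's prediction.
    arrests[target] += INCIDENTS_PER_ROUND

print(arrests)  # {'A': 1055, 'B': 45} -- the small initial gap locks in
```

The greedy dispatch rule is deliberately extreme, but the core problem survives softer allocation schemes: as long as new data comes only from where patrols already go, District B’s true incidents never enter the dataset at all, so the system has no way to discover its own error.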
Qs:
1. How can we resist algorithmic bias?
2. How do we call for accountability for the harm caused by AI/machine-learning technology?
3. Is there a type or piece of technology that benefits everyone equally? In what ways does the technology we use daily for our benefit harm others?
One part of Ari Melenciano’s talk that particularly interested me was her discussion of how our general understanding of design stems from industrialization and capitalism, which are structurally supported by humanism, colonialism, and racism. Design choices across every field, from industrial design to user interfaces, are often motivated by what makes the most profit or generates the most “efficient” work. For example, as we read in “How to Internet” back in week 3, the internet at its inception was filled with far more connection, creativity, freedom, and variety. Now, most websites we visit seem to stem from a single template: there must be a hero section with a tagline to grab attention, a piece of text about business value, and so on. From a blank canvas of infinite possibilities for connection, the web quickly turned into a commercial and business space as advertising took over. In “The Role of Mass Media in U.S. Imperialism,” scholar Robert Chrisman discusses how even the most distinguished creative talent in the United States is used not for creativity’s sake or social change but to “create advertisement jingles and images.” What does it mean for creativity when it is used to drive sales and increase productivity? When words like “human-centered design” are thrown around with little to no meaning? We must be critical of the fact that the development of design itself has been inherently exclusionary and based on systems of oppression, and use Omni-Specialized Design to deconstruct and unlearn the design of the past.
Qs:
1. How do we start imagining better futures (futures without systems of oppression), and what forces prevent us from doing so?
2. How can creativity be used for social change? What events in the past have demonstrated this?
3. What can we, as university students, do to start practicing ecologically-centered design?
Throughout Christine Sun Kim’s talk, I really enjoyed the connections she drew between ASL, language as a whole, and music. I think her points on language and music extend to communication as a whole. As she mentions early in the talk, a musical note cannot be fully expressed on paper. Even with the many notations and articulation markings, each with its own meaning, sometimes written out in full sentences like “Play cute yet frightening,” there is no way to completely translate what a composer had in their head to another person. Just as each interpreter gives Kim a unique voice and identity, one conductor may interpret a piece of music completely differently from another, even when all the written notations are the same. This ability, or inability, to communicate an idea, thought, or feeling exactly to another person holds true for any medium. For example, the concept of ‘정’ in Korean is difficult to explain in English, even across multiple sentences. Does that mean the concept or the feeling of ‘정’ does not exist? Even within the same language, this holds true. If you cannot describe in words what you felt after seeing a movie, does that mean your feeling is false?
Qs:
1. What are some ways we communicate, excluding written words and sounds? How can they be notated and expressed to convey the same original meaning?
2. Do you think the arts, or any other type of creative expression, can communicate in ways that traditional languages can’t?
3. Are there ways to translate visual works into auditory sounds and vice versa? Are there ways to translate audio-visual works into tactile sensations?