D4TC: AI

  • Writer: Milan Gary
  • Nov 1, 2017
  • 2 min read

'Since a computer is a universal manipulator of symbols, the reasoning went, and since mental life is involved in the manipulation of symbols, a computer could be constructed that had a mental life.' -John Menick

So this week's readings focused on AI, artificial intelligence. These articles touch on the use of AI in language, music, imagery, and game playing. There's a saying that 'our brains are like computers' — but when creating an AI, shouldn't we reverse that saying and think of the computer as a brain?

I love reading about how an AI / computer-learning system goes up against a human in a game of chess or Go. I used to not understand why it was always these kinds of games that truly tested an AI, but now it's as clear as day. Games like chess and Go are full of strategy, risk, creativity, and anticipation. What's really fascinating is the risk factor that goes into games like these. Sacrifices are made, and I don't think an AI can develop or understand this concept. Will we get to a point where AIs have intuition? If so, what might that look like for the world?

'Images have begun to intervene in everyday life, their functions changing from representation and mediation, to activations, operations, and enforcement. Invisible images are actively watching us, poking and prodding, guiding our movements, inflicting pain and inducing pleasure. But all of this is hard to see.' -Trevor Paglen

This article introduced a WILD concept. This machine-machine vision system gives images the power to see us. The freedom of self-representation is being weakened by these face-reading machines. This sort of technology slaps an immediate label onto you. Machine vision tech can be found in surveillance cameras, bank card readers, facial recognition systems, etc. The most terrifying thing about machine-machine vision is that it's invisible / not obvious, yet it's everywhere. This tech is so far along in development that humans are now questioning how we can even understand or detect this machine-machine visual culture. In the quote below, Paglen suggests that in order to understand machine visual culture we must 'unlearn how to see like humans.' Once we unlearn this, are we still considered human?

'The point here is that if we want to understand the invisible world of machine-machine visual culture, we need to unlearn how to see like humans.' -Trevor Paglen

------

(My thoughts on this topic)

