Links 2017//1//24:

Youtube: Danish kids taking drugs for educational purposes // Youtube: eggs boiling without cooking // Youtube: goldfish in liquid nitrogen, coming back to life (more) // Youtube: a person dousing themselves with liquid nitrogen // Youtube channel: people putting things in a vacuum chamber // Wikipedia: correlation does not imply causation //

Ads

Objectifier empowers people to train objects in their daily environment to respond to their unique behaviors. It gives an experience of training an artificial intelligence; a shift from a passive consumer to an active, playful director of domestic technology. Interacting with Objectifier is much like training a dog – you teach it only what you want it to care about. Just like a dog, it sees and understands its environment.

With computer vision and a neural network, complex behaviours are associated with your command. For example, you might want to turn on your radio with your favorite dance move. Connect your radio to the Objectifier and use the training app to show it when the radio should turn on. In this way, people will be able to experience new interactive ways to control objects, building a creative relationship with technology without any programming knowledge.
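A minimal sketch of the interaction pattern described above, assuming a generic classifier in place of Objectifier’s actual model – the feature vectors, labels, and the on_new_frame helper are invented for illustration:

```python
# Hypothetical sketch: a classifier maps features extracted from camera
# frames to a binary on/off decision, the way the training app pairs
# demonstrations with the desired state of the connected appliance.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: pose features recorded while the user demonstrates
# "radio on" (label 1) versus ordinary activity (label 0).
X_train = np.array([
    [0.90, 0.80, 0.10],  # dance move, arms up
    [0.85, 0.90, 0.20],  # dance move, variant
    [0.10, 0.20, 0.90],  # sitting still
    [0.20, 0.10, 0.80],  # walking past
])
y_train = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

def on_new_frame(features):
    """Decide whether the connected radio should be on for this frame."""
    return bool(model.predict([features])[0])

print(on_new_frame([0.88, 0.85, 0.15]))  # dance-like input -> True
```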

Puntos destacados de… DATA STREAMS por HITO STEYERL AND KATE CRAWFORD // via TheNewInquiry // November 7, 2016, a partir de una conversación de Skype, parte 1.

KATE CRAWFORD. There are these hard limits that are reached in the epistemology of “Collect it all” where we reach a breakdown of meaning, a profusion and granularization of information to the point of being incomprehensible, of being in an ocean of potential interpretations and predictions. Once correlations become infinite, it’s difficult for them to remain moored in any kind of sense of the real. And it’s interesting how, for both of us, that presents a counter-narrative to the current discourse of the all-seeing, all-knowing state apparatus. That apparatus is actually struggling with its own profusion of data and prediction. We know that there are these black holes, these sort of moments of irrationality, and moments of information collapse.

KATE CRAWFORD. (…) the thing that got me through were these moments of humor. It’s very dark humor, but in the archive there are so many moments of this type. Some of the slides in particular are written in this kind of hyper-masculinist, hyper-competitive tone that I began to personalize as “the SIGINT Bro.”

KATE CRAWFORD. The other thing that I would love to talk to you about–and this is switching from the state to corporate uses of data, because I know both you and I are interested in how those two are really merging in particular ways–is IBM’s terrorism scoring project (…). I know we are both interested in how this type of prediction is a microcosm of a much wider propensity to score humans as part of a super-pattern.

HITO STEYERL. I’m really fascinated by quantifying social interaction and this idea of abstracting every kind of social interaction by citizens or human beings into just a single number; this could be a threat score, it could be a credit score, it could be an artist ranking score, which is something I’m subjected to all the time. For example, there was an amazing text about ranking participation in jihadi forums, but the most interesting example I found recently was the Chinese sincerity social score. I’m sure you heard about it, right? This is a sort of citizen “super score,” which cross-references credit data and financial interactions, not only in terms of quantity or turnover, but also in terms of quality, meaning that the exact purchases are looked into. In the words of the developer, someone who buys diapers will get more credit points than someone who spends money on video games because the first person is supposed to be socially “more reliable.” Then, health data goes into the score–along with your driving record, and also your online interactions. Basically it takes a quite substantial picture of your social interactions and abstracts it into just one number. This is the number of your “social sincerity.” It’s not implemented yet–there are some precursors in the form of extended credit scores which are already being rolled out–but it is supposed to be implemented in 2020, which is not that long from now. I’m completely fascinated by that.
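A hypothetical sketch of the abstraction Steyerl describes – heterogeneous signals collapsed into one number by arbitrary weights. Every field, weight, and value below is invented for illustration, not taken from any real system:

```python
# Invented weights over invented sub-scores; the point is only the shape of
# the operation: a whole social life reduced to one weighted sum.
WEIGHTS = {
    "credit_history": 0.30,
    "purchase_quality": 0.25,  # e.g. diapers scored above video games
    "health_record": 0.15,
    "driving_record": 0.15,
    "online_behaviour": 0.15,
}

def sincerity_score(signals: dict) -> float:
    """Collapse sub-scores in [0, 1] into a single 'social' number."""
    return sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)

citizen = {
    "credit_history": 0.9,
    "purchase_quality": 0.8,
    "health_record": 0.7,
    "driving_record": 1.0,
    "online_behaviour": 0.4,
}
print(round(sincerity_score(citizen), 3))  # one number stands in for a life
```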

KATE CRAWFORD. When I think about the Chinese citizen credit score, what strikes me is that here, in the West, it gets vilified as a sort of extremist position, like, “Who would possibly create something so clearly prone to error? And so clearly fascist in its construction?” [EVEN SO, WE LEAVE POLITICS IN THE HANDS OF THESE KINDS OF MATHEMATIZABLE SYSTEMS] Yet, having said that, only last week we saw that an insurance company in the UK, the Admiral Group, was trying to market an app that would offer people either a discount on their car insurance or an increase in their premium based on the type of things they write on Facebook.

As for the IBM terrorist credit score, it’s being tested and deployed on a very vulnerable population that has absolutely no awareness that it is actually being used against them; also, it’s drawing upon these terribly weak correlations from sources like Twitter (…), it’s critically important that we question these knowledge claims at every level.

HITO STEYERL. (…) we are kind of back in the era of crude psychologisms, trying to associate social, mental, or social-slash-mental illnesses or deficiencies with frankly absurd and unscientific markers.

KATE CRAWFORD. (…) what we now have is a new system called Faception that has been trained on millions of images. It says it can predict somebody’s intelligence and also the likelihood that they will be a criminal based on their face shape. Similarly, a deeply suspect paper was just released that claims to do automated inferences of criminality based on photographs of people’s faces. (…) Phrenology and physiognomy are being resuscitated, but encoded in facial recognition and machine learning.

(…) we’re seeing these historical returns to forms of knowledge that we’ve previously thought were, at the very least, unscientific, and, at the worst, genuinely dangerous.

HITO STEYERL. I think that maybe the source of this is a paradigm shift in the methodology. As far as I understand it, statistics have moved from constructing models and trying to test them using empirical data to just using the data and letting the patterns emerge somehow from the data. This is a methodology based on correlation. They keep repeating that correlation replaces causation. But correlation is entirely based on identifying surface patterns, right? The questions–why are they arising? why do they look the way they look?–are secondary now. If something just looks like something else, then it is with a certain probability identified as this “something else,” regardless of whether it is really the “something else” or not. Looking like something has become a sort of identity relation, and this is precisely how racism works. It doesn’t ask about the people in any other way than the way they look. It is a surface identification, and I’m really surprised how no one questions these correlationist models of extracting patterns on that basis. [IBM’s] Hollerith machines (…) were used in facilitating deportations during the Holocaust. This is why I’m always extremely suspicious of any kind of precise ethnic identification.
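A toy illustration of the point about surface patterns, with invented numbers: two series that merely share an upward trend correlate almost perfectly, though neither causes the other.

```python
import numpy as np

# Invented data: both series simply grow over time.
years = np.arange(2000, 2017)
rng = np.random.default_rng(0)
ice_cream_sales = 100 + 5 * (years - 2000) + rng.normal(0, 3, len(years))
drownings = 20 + (years - 2000) + rng.normal(0, 1, len(years))

# Near-perfect correlation from a shared trend alone: a surface pattern,
# not a causal link in either direction.
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation: {r:.2f}")
```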

HITO STEYERL. There is a danger that if one tries to argue for more precise recognition or for more realistic training sets, the positive identification rate will actually increase, and I don’t really think that’s a good idea.

KATE CRAWFORD. Google has so much information (…) but the connection between its enormous seas of data and actually instrumentalizing that knowledge is still very weak.

[If] you are currently misrecognized by a system, it can mean that you don’t get access to housing, you don’t get access to credit, you don’t get released from jail. So you want this recognition, but, at the same time, the more the systems have accurate training data and the more they have deeper historical knowledge of you, the more you are profoundly captured within these systems.

We are being seen with ever greater resolution, but the systems around us are increasingly disappearing into the background.

KATE CRAWFORD. The narrative that’s being driven by Silicon Valley is that the biggest threat from AI is going to be the creation of a superintelligence that will dominate and subjugate humanity. (…) But to everybody else, those threats are already here. We are already living with systems that are subjugating human labor and particular subsets of the human population in ways that are harsher than others.

[One] of the things that is going to happen in the US is the complete automation of trucking. Now, trucking is one of the top employers in the entire country, so we’re looking at the decimation of a dominant job market.

HITO STEYERL. As people get replaced by systems, one of the few human jobs that seems to remain is security.

KATE CRAWFORD. I often think about this concept of solidarity in a world where so many of these stacks that overlay everyday interactions are trying to individualize and hyper-monetize and atomize not just individuals, but every sort of interaction. Every swipe, every input that we make, is being categorized and tracked. The idea, then, of solidarity across sectors, across difference, feels so powerful because it feels so unattainable.

HITO STEYERL. Have you seen any example of an AI that was focused on empathy or solidarity? Do you see the idea of comradeship anywhere in there?

KATE CRAWFORD. ELIZA is the most simple system there is. She is by no means a real AI and she’s not even adapting in those conversations, but there’s something so simple about having an entity ‘listen’ and just pose your statements back to you as questions. (…) ELIZA as an empathy-producing machine because she was a simple listener. She wasn’t trying to be more intelligent than her interlocutors, she was just trying to listen, and that was actually very powerful.

20:04 16/01/2017
Robert Epstein:

My favourite example of the dramatic difference between the IP perspective and what some now call the ‘anti-representational’ view of human functioning involves two different ways of explaining how a baseball player manages to catch a fly ball – beautifully explicated by Michael McBeath, now at Arizona State University, and his colleagues in a 1995 paper in Science. The IP perspective requires the player to formulate an estimate of various initial conditions of the ball’s flight – the force of the impact, the angle of the trajectory, that kind of thing – then to create and analyse an internal model of the path along which the ball will likely move, then to use that model to guide and adjust motor movements continuously in time in order to intercept the ball.
That is all well and good if we functioned as computers do, but McBeath and his colleagues gave a simpler account: to catch the ball, the player simply needs to keep moving in a way that keeps the ball in a constant visual relationship with respect to home plate and the surrounding scenery (technically, in a ‘linear optical trajectory’). This might sound complicated, but it is actually incredibly simple, and completely free of computations, representations and algorithms.
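A minimal sketch contrasting the two accounts (my construction, not McBeath’s code): the IP version estimates initial conditions and predicts a landing point, while the heuristic version never models the flight at all and only reacts to the ball’s image angle. The optical-acceleration rule below is a stand-in for the paper’s ‘linear optical trajectory’.

```python
import math

G = 9.8  # gravity, m/s^2

def ip_strategy(speed, angle_deg, launch_height=1.0):
    """'Information processing': model the flight, predict where it lands,
    and run straight to that point."""
    vx = speed * math.cos(math.radians(angle_deg))
    vz = speed * math.sin(math.radians(angle_deg))
    # Solve launch_height + vz*t - (G/2)*t**2 = 0 for the landing time.
    t_land = (vz + math.sqrt(vz**2 + 2 * G * launch_height)) / G
    return vx * t_land

def heuristic_step(fielder_x, tan_now, tan_prev, tan_prev2, dt, gain=40.0):
    """Anti-representational: no trajectory, no prediction. Just cancel the
    optical acceleration of the ball's image angle, drifting back when the
    image accelerates and in when it decelerates."""
    optical_accel = (tan_now - 2 * tan_prev + tan_prev2) / dt**2
    return fielder_x + gain * optical_accel * dt

print(f"IP prediction: run to {ip_strategy(25, 45):.1f} m")
```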
(…)
Worse still, even if we had the ability to take a snapshot of all of the brain’s 86 billion neurons and then to simulate the state of those neurons in a computer, that vast pattern would mean nothing outside the body of the brain that produced it. This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning. Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive. There is no on-off switch. Either the brain keeps functioning, or we disappear. What’s more, as the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised.

https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
Blog on the anti-representational stance: http://psychsciencenotes.blogspot.com/p/about-us.html
Review the studies on memory, given the whole question of how feasible it is to download memory to an external database.

Real Intelligence

19:50 16/01/2017

In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.
1. That spirit ‘explained’ our intelligence – grammatically, at least.
2. The hydraulic model of human intelligence: the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning.
3. Automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain.
4. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature.
5. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.
6. The mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’, drawing parallel after parallel between the components of the computing machines of the day and the components of the human brain.

2:17 24/11/2016
Mati had sent me a video of neural networks evolving as they learn to play Mario. It can be linked to Moretti, Bergson, and everyone who explains lines of evolution. Even Simondonian concretization: the information machine also learns from experience. Philosophy of information is a more general theory than genetics, engineering, and neuroscience.
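A minimal sketch of the neuroevolution idea behind that kind of video – a generic evolutionary loop over network weights, assumed here in place of the actual MarI/O / NEAT code; the fitness function is a toy stand-in for ‘distance travelled in the level’:

```python
import random

def random_net(n_weights=8):
    """A 'network' reduced to a flat weight vector for illustration."""
    return [random.uniform(-1, 1) for _ in range(n_weights)]

def mutate(net, rate=0.1):
    """Offspring inherit weights with small Gaussian perturbations."""
    return [w + random.gauss(0, rate) for w in net]

def fitness(net):
    """Toy stand-in for how far the network gets in the level."""
    return -sum((w - 0.5) ** 2 for w in net)

population = [random_net() for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    elite = population[:5]  # the best players survive...
    # ...and the rest of the population is rebuilt from their mutants.
    population = elite + [mutate(random.choice(elite)) for _ in range(15)]

print(f"best fitness after 50 generations: {fitness(population[0]):.4f}")
```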