Mary Shelley wrote Frankenstein in 1818. George Orwell wrote Animal Farm in 1945 and Nineteen Eighty-Four in 1949. The underlying theme is the invisible Big Brother controlling you, one who may not even exist. From there arose the idea of super-machines and, in 1982, the Adrian Mole series. It is nothing unusual for today’s children to see machines (robots and other beings) talk on the television screen.
In 2009, the UK paper The Telegraph reported that the European Union was spending millions of pounds developing ‘Orwellian’ technologies designed to scour the internet and CCTV images for ‘abnormal behaviour’. A five-year research programme, called Project Indect, aimed to develop computer programmes which act as ‘agents’ to monitor and process information from websites, discussion forums, file servers, peer-to-peer networks and even individual computers. Life may become very difficult a decade down the line as humans are marginalised, ironically, by their own creations. Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control.
In a cautionary TED talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns, and in ways we won’t expect or be prepared for. ‘We cannot outsource our responsibilities to machines,’ she says. ‘We must hold on ever tighter to human values and human ethics.’ Machine intelligence, she argues, makes human morals more important.

Here are excerpts from the talk:

I started my first job as a computer programmer in my very first year of college, basically as a teenager. Soon after I started working, writing software at a company, a manager came down to where I was, and he whispered to me, ‘Can he tell if I’m lying?’ There was nobody else in the room. He was pointing to the computer! Nowadays, there are computational systems that can suss out emotional states, and even lying, from processing human faces. Advertisers and even governments are very interested.

I had become a computer programmer because I was one of those kids crazy about math and science. But somewhere along the line I’d learned about nuclear weapons, and I’d become really concerned with the ethics of science. I was troubled. However, because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don’t have to deal with any troublesome questions of ethics. So I picked computers.
Nowadays, computer scientists are building platforms that control what a billion people see every day. They’re developing cars that could decide who to run over. They’re even building machines, weapons, that might kill human beings in war. It’s ethics all the way down.

We’re now using computation to make all sorts of decisions, but also new kinds of decisions. We’re asking questions to computation that have no single right answers, that are subjective and open-ended and value-laden.
We’ve been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs.
To make things more complicated, our software is getting more powerful, but it’s also getting less transparent and more complex. Recently, in the past decade, complex algorithms have made great strides. They can recognise human faces. They can decipher handwriting. They can detect credit card fraud and block spam and they can translate between languages. They can detect tumours in medical imaging. They can beat humans in chess and Go.
Much of this progress comes from a method called ‘machine learning’. The downside is, we don’t really understand what the system learned. In fact, that’s its power. This is less like giving instructions to a computer; it’s more like training a puppy-machine-creature we don’t really understand or control. So this is our problem.
It’s a problem when this artificial intelligence system gets things wrong. It’s also a problem when it gets things right, because we don’t even know which is which when it’s a subjective problem. We don’t know what this thing is thinking. (Listen to the full talk for more details.)