1. It knows what you’re thinking

Who’s on the other end spreading sedition – human or robot?

Ask Siri a question and she’s scrabbling around trying to unpick your mangled syntax. But Xiao i has no such problems. This AI, exhibited at the ominously titled World Robot Conference in China recently, has sufficient data to understand the human brain and how that blob of jelly actually works.

Language is just one aspect of communication, and Xiao i can add context and other analyses of how fleshbags work. It currently lives as software to be run on mobile phones, and it beat Siri to a voice-recognition breakthrough by three years, back in 2004.

2. It knows that you’re up to no good

Is she ordering a pizza or bringing down Western capitalism?

In the coming months two stock exchange operators are launching AI tools that can intuitively pick up on subtle clues that point to stock manipulation and misbehaviour. They could, for example, pick out suspicious phrases in chat-room messages around the time of a big deal, or learn the kind of deep-background behaviour that points to market manipulation.
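What does “picking out suspicious phrases” look like in code? The toy sketch below gives a rough flavour only – nothing like the exchanges’ real surveillance systems – and the phrase list, message format and flag_messages helper are all invented for illustration.

```python
from datetime import datetime, timedelta

# Toy illustration only: scan chat messages sent close to a big deal for
# phrases on a (made-up) watch list. Real market-surveillance AI is far
# more sophisticated than keyword matching.
SUSPICIOUS_PHRASES = ["keep this between us", "move the price", "before the announcement"]

def flag_messages(messages, deal_time, window_hours=48):
    """Return messages containing a watched phrase near the deal time."""
    window = timedelta(hours=window_hours)
    return [
        msg for msg in messages
        if abs(msg["time"] - deal_time) <= window
        and any(phrase in msg["text"].lower() for phrase in SUSPICIOUS_PHRASES)
    ]

deal = datetime(2016, 11, 1, 14, 30)
chat = [
    {"time": datetime(2016, 11, 1, 9, 0), "text": "Keep this between us until the close"},
    {"time": datetime(2016, 10, 1, 9, 0), "text": "Lunch on Friday?"},
]
print(flag_messages(chat, deal))  # flags only the first message
```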

Nasdaq and the London Stock Exchange Group expect it to be in place by the end of the year, while US financial regulatory authorities are also pitching into the market, according to Fortune.com.

3. It is learning how the courts work

The Promobot V.3 assistant robot, unveiled at the 2016 Open Innovations Forum at the Skolkovo Technopark. It has dead eyes.

Judicial decisions of the European Court of Human Rights (ECtHR) have been predicted with 79% accuracy using an AI method developed by a number of researchers, including those at UCL.

Using a machine learning algorithm, the method analysed case texts and predicted which way the judges would land in most cases.
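The paper’s exact pipeline isn’t reproduced here, but text classification of this sort typically looks something like the sketch below: turn each case text into word-count-style features and feed them to a linear classifier. The case snippets and labels are invented, and scikit-learn is assumed.

```python
# Toy flavour of judgment prediction from case text, not the UCL team's
# actual method. Snippets and labels are made up; requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

cases = [
    "applicant held without charge for months with no access to a lawyer",
    "complaint about planning permission dismissed by the domestic courts",
    "detainee reports ill treatment during questioning",
    "dispute over a property boundary resolved at appeal",
]
labels = [1, 0, 1, 0]  # 1 = violation found, 0 = no violation (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(cases, labels)
print(model.predict(["applicant denied a lawyer while in detention"]))  # likely [1]
```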

The researchers don’t believe AI could replace human judgment – that’s what they all say – but “it could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights,” said Dr Nikolaos Aletras, who led the study at UCL Computer Science.

4. It has eyes on your job

Toyota Motor Corp’s Kirobo Mini robot – is this how the Takeover starts, with an “aaaah”?

Human transcriptionists could be the first (albeit small) tranche of professionals who lose their jobs to AI. Microsoft researchers have developed a speech recognition system that is just as accurate as human transcribers. Its Artificial Intelligence and Research group has achieved a word error rate of just 5.9% – the same as flesh-and-blood listeners.

The data was taken over the phone and included two-way conversations. The system can still be fooled, though – it confuses “uh”, the filler, with “uh-huh”, meaning yes – so there will be a backdoor post-Takeover.
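That 5.9% figure is a word error rate: the minimum number of word substitutions, insertions and deletions needed to turn the machine’s transcript into the human reference, divided by the length of the reference. A minimal Python version of that standard calculation is below; the example sentences are made up.

```python
# Word error rate (WER), the metric behind the 5.9% figure: edit distance
# between the recogniser's words and the reference words, divided by the
# number of reference words. Standard dynamic programming; toy sentences.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("uh huh see you tomorrow", "uh see you tomorrow"))  # 0.2
```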

5. It is learning how to learn… better

The Urobot by Xiao Yanlin at the WRC 2016 World Robot Conference in Beijing, China. It's laughing at us, people.

AI has made extraordinary advances in learning within specific fields but, until now, has been unable to leap effectively from one specialised area to another, carrying the lessons of the first with it.

This so-called “generalised learning” is, say scientists, one of the last great advantages humans have over software, and it keeps us just ahead in the battle to run the planet (our inference).

Humans can take lessons from mountain climbing, say, and adapt them for cross-country running. A computer has to learn each task anew. DeepMind, the London-based AI firm, has come up with the Differentiable Neural Computer, which uses external memory to store previous models and to feed and inform new ones.
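Very roughly, the idea is a network that reads from and writes to a memory matrix sitting outside itself, using soft attention weights, so information laid down earlier is still there to inform later computation. The NumPy sketch below is a toy illustration of that read/write mechanism, not DeepMind’s actual architecture; the sizes, weights and vectors are arbitrary.

```python
import numpy as np

# Toy external-memory read/write in the spirit of a Differentiable Neural
# Computer. Not the real architecture: just a memory matrix, a soft write,
# and a content-based (similarity-driven) read.
def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def write(memory, weights, vector):
    """Blend a vector into memory according to soft write weights."""
    keep = 1.0 - weights[:, None]               # fade old content where we write
    return memory * keep + np.outer(weights, vector)

def read(memory, key, sharpness=5.0):
    """Content-based read: slots most similar to the key dominate the result."""
    weights = softmax(sharpness * memory @ key)
    return weights @ memory

memory = np.zeros((8, 4))                       # 8 slots, 4 numbers per slot
write_weights = np.zeros(8)
write_weights[2] = 1.0                          # the real DNC learns these weights
memory = write(memory, write_weights, np.array([0.2, 0.4, 0.6, 0.8]))
print(read(memory, np.array([0.2, 0.4, 0.6, 0.8])))  # roughly the stored vector
```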

“This new form of generalized learning could pave the way for an era of artificial intelligence the likes of which will strain the human imagination,” says the ExtremeTech website.

Is it thinking what you're thinking?

Yikes. All together now: “uh”, “uh-huh”, “uh”, “uh-huh”.