on 12-06-2018 19:29
Apparently released to show what can happen if exposed to the wrong type of input.
Scary stuff.
on 13-06-2018 09:14
I had not seen this on the news yet, quite scary indeed ...
It's a really interesting read though @Anonymous, thanks for sharing it!
13-06-2018 09:25 - edited 13-06-2018 09:26
I just thought it was extremely frightening. They should just stop meddling...
I am an avid watcher of Humans on Ch 4 and although I love the series, it always makes me wonder 'what if'...
Veritas Numquam Perit
on 13-06-2018 14:38
Aren't science and technology wonderful? I always worry that hackers could cause mayhem!
on 13-06-2018 19:58
Life really is imitating art in the technology field these days.
In response to the security breaches we hear about, the big networking hardware and security vendors are touting AI and machine learning as a cure-all. Meanwhile, the services you use on the 'net run on infrastructure designed to shift services (or "workloads", as they are called in IT) between data centres automatically in the event of an issue (fire, flood, electrical fault or human error), without the end user even knowing there was a problem.
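The automatic workload-shifting idea amounts to a health check plus a placement rule. Here's a minimal sketch of that logic; all the names (the data centre list, check_health, place_workload) are hypothetical, for illustration only, not any vendor's actual API:

```python
# Hypothetical sketch of automatic failover between data centres.
# A workload is placed in the first data centre that passes a health check;
# if the primary fails (fire, flood, power fault), placement silently moves on.

DATA_CENTRES = ["dc-london", "dc-leeds"]  # priority order: primary first

def check_health(dc: str, outages: set) -> bool:
    """Stand-in health probe: a data centre is healthy unless it is in the outage set."""
    return dc not in outages

def place_workload(outages: set) -> str:
    """Return the first healthy data centre for the workload to run in."""
    for dc in DATA_CENTRES:
        if check_health(dc, outages):
            return dc
    raise RuntimeError("No healthy data centre available")

# Normal operation: the workload runs in the primary data centre.
print(place_workload(set()))           # dc-london
# Primary goes down: the workload moves automatically, invisibly to the user.
print(place_workload({"dc-london"}))   # dc-leeds
```

The end user never sees the switch, which is exactly the "keeps running irrespective of whatever happens" property discussed below.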
Sound familiar?
In pretty much any sci-fi, the 'bad' system is portrayed as built to keep running irrespective of whatever happens, and it often uses anything at its disposal to stop the humans who are trying to shut it down.
The vendors talk about internal as well as external threats (for example, the employee who has just been made redundant) and sell their products as protecting against those too. Good system design dictates that no one individual should be able to bring down the whole lot, but how much decentralisation you can achieve depends, of course, on how many staff you are willing to employ.
Here's where it can turn into a scenario straight from the films, and why I am concerned about putting AI in charge of system security.
You might need to forcibly take a system offline if it is not working in the desired way. But if the system has been trained to be wary of any internal or external attempt to stop its operation, you can easily end up in exactly the scenario the films depict; the effect could be benign, or it could cause absolute chaos.