This is for all the weary lovers, likers, flamers, friends, families, couples, loners, et al…

That reality isn’t far away. With machine learning and massive datasets, conversational Artificial Intelligence routines are now able to study your personal vocabulary and respond for you in ways you statistically would. Through the same iterative process they use to recognise they’re misreading “sleeping with” every time you mean to swype “speaking with”, the AI onboard your devices can easily go from finishing your sentences to holding entire conversations on your behalf.

But before we can achieve total and imperceptible automation, there are many human inconsistencies to consider. The algorithms behind predictive text engines base their recommendations on your past behaviour and simply give you more of the same, cutting variety out of the equation. They are not yet programmed for the imprecise construct of the persona, or for things like creativity, context and the messy structure of most relationships. They certainly cannot pick up on the non-verbal cues which have become ever harder to see and read. So while it is quite possible for a routine to faithfully reproduce tone and vocabulary, and hard but not impossible for it to manifest originality, you’d be hoping for a virtual miracle to see a machine intuit the intent and machinations of the human heart.
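To make that “more of the same” concrete, here is a minimal sketch [in Python, with an invented message history] of the kind of frequency counting a predictive engine leans on: it can only ever offer you continuations it has already seen you type. This is an illustration of the principle, not how any real Smart Compose works.

```python
# A toy "smart compose": it has only ever seen your past messages,
# so it can only ever suggest more of the same.
from collections import Counter, defaultdict

past_messages = [
    "are you free for dinner tonight",
    "are you free for a call tomorrow",
    "speaking with the team tomorrow",
    "speaking with mum tonight",
]

# Count which word tends to follow which in your own history (a bigram model).
following = defaultdict(Counter)
for msg in past_messages:
    words = msg.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def suggest(prefix: str, n: int = 3) -> list[str]:
    """Suggest the n most frequent continuations of the last word typed."""
    last = prefix.split()[-1]
    return [word for word, _ in following[last].most_common(n)]

print(suggest("are you free for"))   # e.g. ['dinner', 'a']
print(suggest("speaking with"))      # e.g. ['the', 'mum']
```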

Competence aside, the burning question is just how far we WANT natural language processing systems to go with this assignment. Right now, we rely on data-aided prompts simply to improve productivity [and perhaps enhance creativity], but human evolution has turned lazy and chaotic, and will not be governed by the rigid task parameters of the programs that serve us. Ensnared in a morass of uncivil and antisocial online behaviour, will we begin to see autocorrected or even autocomposed conversation as a sanitised and unimpeachable grand convenience to overcome the volatility of our own instinctual responses? What fragile faculties and spontaneous agency do we stand to lose in the process? Will this reinforce a templated call-and-response expectation of communication exchange, and how will we deal with unexpected feedback? Will we cede to ‘convenience über alles’ or remain willing, or even able, to switch autopilot on and off? And, given how vital interpersonal communication is to our psyches, isn’t it more than conceivable that this will allow the recommendation engines to go quickly from dictating how we should respond to telling us what to think next and even how to feel?
We do know that the stable door on the AI farm has been left open way past late, and that our greatest minds [including technophiles like Stephen Hawking, Elon Musk, Steve Wozniak and Nick Bostrom] have sounded the alarm, calling for urgent safeguards. As we witness the gradual and willing accommodation of the machine into all areas of our lives, we realize that we may be losing sight of the boundaries that separate us. Microsoft recently acknowledged the terrifying capability of the GPT-3 universal language generation system [which can process 175 billion machine learning parameters] by licensing its source code in order to lock it away. Select testers and partners can still use the public API to receive output, and the results are giving everyone the chills – fears include what it might mean for “misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting”. In a demonstration of the ‘emergent’ power of the program, The Guardian newspaper asked it to compose an essay on why AI didn’t pose any threat to humanity. The remarkably lucid output shows it clearly doesn’t have a setting for paranoia – yet.
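For the curious, this is roughly what “using the public API to receive output” looked like for approved testers, sketched here with the legacy openai Python client; the key, prompt and parameters are placeholders for illustration, not a record of anyone’s actual setup.

```python
# A hedged sketch of querying GPT-3 through the public API, assuming the
# legacy (pre-1.0) openai Python client that approved testers used.
import openai

openai.api_key = "sk-..."  # hypothetical key issued to an approved tester

response = openai.Completion.create(
    engine="davinci",  # the largest GPT-3 engine exposed at launch
    prompt="Write a short essay on why AI poses no threat to humanity.",
    max_tokens=200,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```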
To sound out just how close we might be to deploying our own solipsistic chatbot twins, we’d like your help to test [and just briefly mess with] one of the more widely used communication-assistance algorithms:
YOU AUTOCOMPLETE ME, the game
Get to know the AI that knows you!
How to play:
- This game asks you to build a conversation using the Gmail Smart Reply feature, which currently suggests three short responses at the bottom of the emails you receive
- Using a browser, log in to your Gmail account. Check your Gmail general settings [top right corner] and scroll midway down the options to ensure ‘Writing suggestions’ and ‘Personalization’ in the Smart Compose section, as well as ‘Smart Reply’ in the Smart Reply section, are set to ON
- Send an email message [hopefully with a provocation] to someone you communicate with regularly [or haven’t spoken to in forever]
- Contact them separately, and ask them to respond to your email ONLY by selecting one of the three generic [and yet personalized] reply options offered to them by their friendly [and only slightly creepy] AI and then hitting ‘send’
- Similarly, respond to their reply ONLY with one of the three options offered to you beneath it
- Carry on the thread until you see a poem, pattern or resolution [a toy simulation of this back-and-forth follows these instructions]
- Stop whenever you want – and then forward the conversation to evolove
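If you want a feel for why the thread tends toward a pattern, here is a toy simulation of the exchange [in Python, with made-up suggestion tables standing in for Gmail’s three Smart Reply options]; it is a caricature of the mechanic, not the real thing.

```python
# A caricature of the game: each side may only answer with one of three canned
# suggestions keyed off the other side's last message, and the thread stops as
# soon as it starts repeating itself. The suggestion tables are invented.
SUGGESTIONS = {
    "default":      ["Sounds good!", "Thanks!", "Will do."],
    "Sounds good!": ["Great!", "Perfect.", "See you then."],
    "Great!":       ["Sounds good!", "Thanks!", "See you then."],
    "Thanks!":      ["No problem!", "Anytime.", "You're welcome!"],
}

def smart_reply(last_message: str) -> str:
    """Pick the first of the three options offered for the last message."""
    return SUGGESTIONS.get(last_message, SUGGESTIONS["default"])[0]

thread = ["Fancy letting our phones do the talking this week?"]  # the provocation
seen = set()
while thread[-1] not in seen:      # carry on until the conversation loops
    seen.add(thread[-1])
    thread.append(smart_reply(thread[-1]))

print("\n".join(thread))
```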
And that’s it. A simple show-and-tell exercise to reveal the state of your cybernetic union. We’re not expecting deep poetry, great art or even meaningful exchange, but who knows, maybe your AI is smarter than mine…
We know that it has already read this piece.
Just saw this!
Ok, very interesting. But what about Engleman saying wouldn’t it be far more interesting to connect the AI with our brains, so that the data is streaming through us rather than us having to view and then process it? He argues that the mind could then process and intuit things like – will it rain tomorrow? Or what we think the stock market will do. That makes me very uncomfortable because it is giving away the keys to the kingdom. All in the name of efficiency and inevitability…