The dangers of a canny valley.

AI/Machine learning assisted actors, voices, story points, and the proposed effects of perfect content.

If you recall the early days of Amazon Originals, you remember odd pairings of actors and show plots. (Beloved Actor) is a (Consumer Relatable Job Title) who is (Random Verb)-ed by a (Popular Product Feature)-ing (Random Noun). A few years later, Netflix picked up on the idea, using viewer data to help generate more marketable ideas. As we get better at parsing and humanizing big data, we may finally have enough typewriters and robots to recreate Shakespeare--thus fulfilling the prophecy of machine learning.


Google claims its voice assistant will soon be able to mimic true human speech. This means we will soon have podcasts where both the hosts and the guests are artificial. While the authors of these podcasts will remain human (for now), writers will have the ability to give their stories professional voices, possibly for free. If you use any professional scriptwriting software, you are probably already familiar with a similar (if more robotic) feature.


CGI humans are horrific abominations, but only when they are animated. Stills of artificially generated humans do quite well as influencers and will fool a surprising percentage of VFX artists. Moore's law won't let the animated versions appear artificial for long. Either AI or animators will figure it out, and then all bets are off. Most movies we watch now are technically cartoons, with human actors being no more than greenscreen homunculi. James Cameron's Avatar is an amazing cartoon where cats fight robots. Would it have mattered if the 12% of the movie that wasn't CGI had been CGI? Maybe. Will it matter in 10 years? No.

One more step...

With the additions of cloud rendering, geo-targeting, and subscriber modeling, we have a world where AI can create a firehose of custom content aimed at the user and tailored to give them exactly what they want. It's what my youth minister once described to me as his idea of God. He isn't wrong. Looking at user patterns at DIRECTV, one thing was obvious: people build bubbles. We all have a set of truths that we build our world from, and we choose the content (and friends) that reinforce those truths. So a cell phone that continually feeds us truths that fit our bubble will define our reality. (I'll get deeper into geo-targeting when I talk about the death and resurrection of the commercial in another post.)

Is the bubble good?

Have you ever moved away from your hometown, come back to visit, and realized that everything has changed? It's because you have added new truths to your bubble; your worldview has changed more than the views of those back home, because you needed to adapt to a new environment. When you find yourself fighting with your family at Thanksgiving, blame Darwin.

Is MY bubble good?

I'm predicting a future where your entertainment is customized to reinforce your worldview. Consider "the Fox effect," where one person in a family becomes obsessed with the nonstop endorphin rush of Fox News and is eventually excommunicated. (There are some fascinating studies on this.) It is an extreme version of FOMO, where the person feels that if they stop monitoring the news, they aren't helping fight the next great evil. Their participation is mandatory as they become addicted to fear. Their bubble becomes a snake that eats itself. That is the danger of bespoke entertainment--if it can reinforce our truths, what is stopping it from subtly changing our truths? Could we even develop the tools to know the difference?

"Death to Videodrome, all hail the new flesh." -Videodrome