Video teaser for possible 2015 predictions topics

[Video: a preview of some predictions topics, filmed at Mont Serein]

This is obviously not an official video. And some or all of these topics might not make it to the final report to be published in January 2015.

But the hours I spend hiking ARE an integral part of how I build and write my predictions. Online research and client meetings are important too, but long blocks of uninterrupted time are crucial to how I write. Plus, discussing the topics with Barbara is a great help: she asks questions, spots holes in the logic, and I can start getting a sense of which topics will excite broader audiences.

We were hiking Mont Ventoux on the GR9 yesterday from Brantes: about 30 km round trip, with 1,400 meters of vertical. We went straight to the summit, but it was too windy to picnic up there, so we stopped for lunch at the camping area near Mont Serein.

Special thanks to Miss Barbara Stewart, who was my videographer extraordinaire.

Not just that, but since she is also the person I talk to about my predictions topics while hiking, I guess she is my muse! In Greek legend, there are nine Muses, each with her own specialty: Clio (History), Urania (Astronomy), Melpomene (Tragedy), Thalia (Comedy), Terpsichore (Dance), Calliope (Epic Poetry), Erato (Love Poetry), Polyhymnia (Songs to the Gods), Euterpe (Lyric Poetry).

Hmmm. None for predictions. Therefore I will invent a tenth muse for Barbara: Callipygia!

(I further predict that almost no one will get that joke.)

Duncan’s Guide to Hiking in France

[Video: Cougoir mountaintop video]

For years I have driven around France, and literally hundreds of times I have seen clusters of cars parked by the side of a road near a pole with small yellow signs on it. “What were they doing?”

Rando.

France is honeycombed with trails for hikers (randonneurs). Some are famous trails in the Alps and the Pyrenees, but even near my favourite town of Nyons there are 30-40 trails within a short drive of our hotel. Since trying a single trail behind our hotel for the first time six years ago, we now return to spend at least a week, and up to 15 days, hiking in the area. Why?

It’s a wonderful workout, and we get to have dessert every night with dinner! The view of the French countryside from the trails is very different from the one you get from a car, or even an after-dinner walk down a local lane. Our sense of French vineyards, lavender fields, geography and “feel” for the land is so much sharper today. Finally, we have busy jobs meeting thousands of people per year, so a chance to walk for 3-6 hours and meet only a handful of fellow hikers (a brief “Bonjour” being the extent of any conversation) is the perfect quiet couple’s vacation for us.

People whose hiking experience comes from Canada may have been daunted by some scary moments in the Rockies (“Did you know that if you fall into a crevasse you will die of hypothermia before we can rescue you?” etc.). Those hikes can be intimidating, so let’s talk about how you can get started hiking in France.

Trails and Trailheads

The first thing you need is a Carte de Randonnée from IGN. They are for sale in any local Maison de la Presse or tobacconist, and in most hypermarchés. Or you can buy them online!

[Image: IGN Top 25 map 3139OT]

If you are even mediocre at reading a map, these are dead easy to use. You find your peak (Cougoir, in this case) and locate the trailhead starting at the Vieux Village of Teyssières. Google Maps says that is about 24 km from Nyons, and gives you all the roads. That will take you at least 40 minutes if you aren’t insane. These roads are one lane wide, twisty, and if you can make 40 km/h you are a better driver than I am. Please be careful – lots of trucks, cars, and cyclists.

Park your car at the pole with the little yellow signs (there is always room) and head up the trail. Trails in France are BRILLIANTLY marked compared to Canada or the USA. They use a code of blazes (white, red and yellow paint) on trees and rocks to indicate the correct trail, wrong trails, even the direction of curve up ahead. Follow the horizontal yellow over red markings (for this trail) and avoid the yellow crossed with red (like an ‘X’) and you will be fine. The trails tend to be wide and well maintained.

Gear

Which leads to the best part: you need running shoes, not hiking boots. No snow, no streams, no swamps. A pair of shorts, a t-shirt, hiking socks (blisters are the enemy) and a hat if your hairline resembles mine. Lots of sunscreen. We use two packs: a CamelBak filled with ice cubes and water back at the hotel, and a bigger backpack for fleeces, extra water and a tarp. We carry a liter of water per hour for the two of us, unless it is unusually hot. Always bring a fleece: clouds can cover the mountain summit at any time, and the temperature can go from 28C to 13C in 30 minutes or less. If you don’t need to wear the fleeces, they make excellent pillows for lunch-time naps. No bug repellent: it is too dry for mosquitoes. No bear spray: there are no bears! There are venomous vipers in France, but I have never seen one.

Finally, if you are skittish about heights, most of these trails are hundreds of years old, and were used by kids and seniors to get to market. You will almost never have to scramble, to put your hands down for support, or walk along a sheer cliff edge (unless you want to.)

Lunch!

We make our wraps in the hotel in the morning, and they keep cool in the CamelBak. We time our hike so that we are at or near the summit between noon and 2 pm (depending on how much we pigged out the night before!) We find a spot with a nice view, facing south-ish, with a gentle 5-10 degree slope. We lay our trusty ten-year-old blue tarpaulin (6’ x 8’; $10 at Canadian Tire) down flat, stick our backpacks by our heads as pillows, and eat lunch, followed by a 30-minute snooze in the sun. We have NEVER had a better lunch in France than these picnics on mountaintops.

Exertion, time, and steepness

Most people know Barbara and I are fairly fit. And we have done some crazy long hikes at brutal paces. But we don’t do that every day, or we would get injured. Cougoir is a rest-day kind of hike: we get our legs moving, but it is not a huge exertion. The trailhead is at 650 m altitude, and the summit is 1,221 m. According to the signposts, it is 7.5 km each way. There are two longish flat bits (maybe 1.5 km in total), so the vertical of just under 600 m is accomplished over about 6 km, or roughly a 10% grade. That isn’t flat, but as a comparison the Grouse Grind is 31%.

Most healthy adults walk about 4-6 km per hour on the flat. Adding 600 meters (nearly 2,000 feet) of vertical will slow most people down, but I would be surprised if anyone reading this took more than 2.5 hours on the way up, and 2 hours down. Without pushing ourselves, Barbara and I got up and down in 3h40m, and that included 40 minutes up top for lunch and sunbathing.
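
For anyone who wants to check the arithmetic, here is a minimal Python sketch. The grade calculation uses the signpost figures above; the time estimate uses Naismith’s rule (a standard hiking rule of thumb, roughly an hour per 5 km plus an hour per 600 m of climb), which is my addition rather than anything posted on the trail.

```python
# Back-of-envelope check on the Cougoir numbers, using Naismith's rule.
trailhead_m = 650        # trailhead altitude, from the signposts
summit_m = 1221          # summit altitude
one_way_km = 7.5         # signposted distance, each way
flat_km = 1.5            # the two longish flat bits

ascent_m = summit_m - trailhead_m                    # 571 m of vertical
grade = ascent_m / ((one_way_km - flat_km) * 1000)   # climbing done over ~6 km
print(f"Average grade on the climb: {grade:.0%}")    # ~10%

hours_up = one_way_km / 5 + ascent_m / 600           # Naismith's rule
print(f"Estimated time for the ascent: {hours_up:.1f} hours")   # ~2.5 hours
```

Naismith lands almost exactly on the 2.5-hour upper bound above, and descents usually run faster.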

Bonne route!

Let’s Test Everyone: broader clinical trials are harder than you think

Yesterday on Facebook I linked to an article about the new class of anticoagulant medicines such as Pradaxa, and the current controversy around them. One issue is that these drugs are mainly being prescribed for older and/or sicker patients, but were probably tested primarily on young and healthy volunteers. However:

“In the case of Pradaxa trials, we know elderly people and people with renal disease were relatively under represented when compared with those who were prescribed Pradaxa when it was released onto the market. This is important because those excluded are the very people who are most likely to develop bleeding complications.”

My Facebook friend Maryana Simonovich raised the reasonable idea that “FDA should advocate less selective inclusion criteria for those trials, to get a more realistic clinical outcomes.”

That seems like a good idea, but in my experience (on the board of biotech companies as an investor) Maryana is wrong. But the reasons WHY she is wrong are non-obvious, interesting, and worth a much fuller exploration.

[edited to add: Maryana isn't WRONG, of course. In an ideal world, trials would use the broadest possible inclusion criteria. The points below are just a look at why the potential costs of broader inclusion criteria may outweigh the benefits.]

Why not just test the drugs on the patients most likely to take the drugs?

  • Clinical trials are powered to show statistically significant differences between the various arms: one group gets the drug being tested, and the other gets either a placebo or the current standard of care (such as warfarin, in the case of anticoagulants). The probable difference in efficacy and adverse events between the two arms is almost always small, so you need a large number of patients to see meaningful differences in the few months that the testing will run. For something like Pradaxa, a Phase III FDA trial will have 500-2,000 patients enrolled. If the only endpoint I am looking at is “does the drug reduce clotting safely?” then I stand a good chance of having a successful trial. But once I start asking questions like “Does the drug reduce clotting safely for men and women, black and white, young and old, those with kidney troubles, those with a history of hemorrhagic stroke, etc.?” I need to dramatically increase the size of my trial: 10,000 patients might not be enough (see the sketch after this list for how fast the required numbers grow). Enrolling 10,000 patients (more on this in a second) will take longer, and cost much more money. And before you talk about drug companies only being interested in profits, remember that they are controlled by shareholders who do worry about the ROI for clinical trials. Depending on the disease, more than half of all Phase III trials fail, so demanding that all new drugs be tested across the full range of possible patients means that many fewer drugs (especially for rarer conditions) will be tested going forward. That’s not good.
  • Drug trials have very clear rules. Not only are we looking for drugs to stop clotting, we want to make sure they are safe. Those running the trial will be looking for signs of excessive bleeding, bruising, rashes, headaches, plus the usual stuff all medicines need to worry about (sleepiness, wakefulness, nausea, diarrhea, and so on). But they will be ESPECIALLY vigilant for serious adverse events: stroke, fatal bleeding, and death. A single death (of uncertain cause) can mean that the entire trial is halted. The investigators need to study that death and figure out whether the drug is killing people. That can take weeks or even months, and sometimes the entire trial needs to be restarted. In a trial of 1,000 young healthy volunteers, the odds of a subject dying unexpectedly are very low. But once we add older, sicker patients to our investigation, the laws of probability start working against us. Patient #749 dies after two weeks on the drug. They were in their 70s, and had been on dialysis for ten years. The doctors are pretty sure the new drug had nothing to do with the death, but they can’t be sure. Once again, the cost of drug development goes much higher, and drugs take longer to get to market. Not good.
  • It’s different for pancreatic cancer. Getting 73-year-olds to enroll in new and potentially risky therapies when they are already seriously ill tends not to be a problem: you can recruit quickly, as long as your disease isn’t too rare. But getting older, sicker patients to sign up for a drug trial that is NOT lifesaving is much harder. Not only are the patients worried the drug might be unsafe, but the whole clinical trial process is a big time commitment and can be a pain in the ass. Getting young healthy volunteers (who are often underemployed and looking for a bit of money) is tough enough, and many trials are delayed for months trying to find a thousand people willing to risk their lives for a few hundred bucks. Once you start trying to study the sick and elderly as well, I can guarantee that enrollment will take at least twice as long, perhaps more. Drugs already take many years to come to market. Forcing drug-makers to broaden the inclusion criteria would unquestionably delay many drugs. Don’t get me wrong: we would know they were safer drugs, but the number of people who die waiting for the better drugs could well be larger than the number of lives saved by the extra safety data.
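
To make the first point concrete, here is a minimal sketch of the standard two-proportion sample-size calculation mentioned in that first bullet. The event rates are invented for illustration; they are not from any actual Pradaxa trial.

```python
# How trial size explodes as the difference you are hunting for shrinks.
# Standard two-proportion sample-size formula; event rates are made up.
from scipy.stats import norm

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Patients needed in EACH arm to distinguish event rates p1 and p2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Main endpoint: does the drug cut a 5% adverse-event rate to 3%?
print(round(n_per_arm(0.05, 0.03)))   # ~1,500 per arm, ~3,000 in total

# One subgroup, with a smaller absolute difference (3% vs 2%):
print(round(n_per_arm(0.03, 0.02)))   # ~3,800 per arm
```

Ask that second question for half a dozen subgroups and 10,000 patients disappears very quickly.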

This post isn’t as well hyperlinked as most, but I am on vacation, so figure any blog post is better than none at all!

[As always, I am making no comments or statements about the safety or dangers of Pradaxa, or any other drug. Or its manufacturer. This is just a blog post exploring some of the issues around inclusion criteria for clinical trials, from the perspective of someone who used to work in the field. This has nothing to do with what I do at Deloitte in the tech, media and telecom areas. All opinions are strictly my own and not Deloitte’s.]

How do I pick a dress for my wife?

[Photo: Barbara being interviewed by the Icelandic state broadcaster at the Harpa Concert Hall]

Last night I had two female friends over for dinner, while Barbara was out as a judge for a Business Woman of the Year Award. When she came home, my friends oohed and aahed over the dress she was wearing, and Barbara told them that I had found it for her. One of my friends asked me later: “How do you figure out what dress would look good?”

I am sure that not everyone would care, but I do have a methodology, and I thought I would take the time to share it. Many people won’t read it, many more won’t finish…and at least a few women will print this out and staple it to their husbands’ foreheads!

Getting Started

First, I have developed a sense of what colours in general work for Barbara. She is a natural blonde with a few strands of silver in her hair, blue-green eyes, and usually a bit of suntan from hiking or running outdoors, so there is a whole palette of colours that look good on her. And some that make her look like she just got out of prison! I almost always use colour as my first filter.

Next, there are certain cuts that don’t work: she has a long torso, and looks hideous in anything with an Empire waist. Because she is so slender, many dresses look great on her, but certain lower necklines can make her sternum look bony. I think a 52-year-old woman with the best legs in the world has nothing to fear from a mini-skirt! But some dresses have features that are too girlish, and inappropriate for someone Barbara’s age. Pussy bows. Pouf sleeves.

Since this post is intended to be read by men, I can picture someone not wanting to learn about pussy bows and all the other nomenclature. Fine: you don’t need to know a single one of those terms. The basics can be found here, and as long as you think of any dress as being composed of these 5-6 elements, you are good to go.

Often I just buy a dress because I think it will look nice. But sometimes Barbara needs one for a specific event, and that usually informs my thinking and helps me find the ‘perfect’ outfit.

Case Study #1: Reykjavik Speech

This is about the dress Barbara was wearing the other night. B had been asked to give a keynote speech in Reykjavik in March, to about 250 women at a financial literacy event. Barbara has always said that nothing boosts her speaking confidence like wearing a new dress: she feels sexy and powerful and energized. The picture at the top shows her being interviewed by the Icelandic State broadcaster in the new Harpa Concert Hall.

Not only does Barbara look good in that specific blue-green shade, but the colour makes me think of the sea. Not Mediterranean blue, but a kind of North Atlantic on a sunny but cool day. Next, the white semicircles reminded me of whitecaps, and the seas around Iceland are always whitecapped due to the constant wind. It felt very nautical, and just a bit resonant with the natural surroundings.

Finally, the dress has hundreds of tiny cutouts, so that you can actually see Barbara’s skin (or lovely lingerie) peeking through. Not very much, and nothing inappropriate. It wouldn’t work for an audience of mainly older men: too much prurience. But for an audience of young, fashionable, Scandinavian women, I thought it was a bit daring, a bit edgy, and with just the right whiff of sexiness. Not only would Barbara be energized wearing it, but the women in the crowd would be impressed by her audacity, and more receptive to her sometimes-provocative message.

To me, the dress REINFORCED who Barbara is, what she was saying, and where she was saying it.

Case Study #2: Tel Aviv Video Shoot

One of my other tips to prospective male dress-buyers is to have a few go-to stores. When we are in Vancouver, there is a small boutique on Alberni St. called Blubird. I don’t think I have ever been there and NOT seen something that Barbara liked, frequently on sale. This January I was in Vancouver on my own, and found two dresses that I was very confident she would like, so I bought them, even without her trying them on! (Warning to husbands: this is the advanced class. I wouldn’t have done this a few years ago.) I stuck them in my suitcase, and surprised her with them when we rendezvoused in Calgary the next day. She was thrilled, and I told her that one would be perfect for her upcoming video shoot in Tel Aviv on International Women’s Day. How did I arrive at that conclusion? Take a look at the picture below:

[Photo: Barbara in the white dress, Tel Aviv]

As always, I started with colour. Tel Aviv is also known as The White City, due to its large number of Bauhaus buildings. We had managed a holiday in Hawaii before the Predictions road show started, so I knew Barbara would have enough of a tan to pull off bright white. And this was for a video shoot: the first thing to remember about video is that the camera loves solids. Any kind of colour contrast can be distracting, so monochromatic would be best. As a final subtle touch, the dress had a texture that reminded me of the plaster detailing on some of the Tel Aviv buildings.

Unlike the Iceland dress, the Israel choice was not sleeveless, but had t-shirt sleeves. I would be the first to say that I love Barbara’s arms and shoulders: she works hard at the gym with weights and kettlebells and even gymnastic rings to get that muscle definition. But older women who are very fit tend to have lost some of their subdermal fat, and at certain angles their arms can look very “veiny.” That’s not much of a problem on stage; the audience is too far away. Still photography is also fine; the photographer simply avoids the unflattering shots. But video is unforgiving, and it is all too easy to end up with a few segments that do NOT look good. So some sort of sleeve was a good idea.

Finally, Barbara gets nervous about video shoots. Like many other women, she tenses up, and stands like a little girl with her legs crossed, and rolls her shoulders forward. Which looks TERRIBLE on video. What works best is a strong, upright, and balanced stance, with your feet about shoulder-width apart. The dress I chose had a very fitted bodice (shoulders back!) and the flared skirt naturally encouraged Barbara to adopt a power stance. The results speak for themselves.

Case Study #3: Did I mention the importance of colour?

[Photo: the dress from Case Study #3]

The price was good, the neckline is good, and the jewels are a nice touch. But on a blonde with blue-green eyes, the dress above will always make those eyes look amazing. We picked out this dress YEARS ago, and Barbara still wears it multiple times per summer.

I think every husband should get to have as much fun as I have been having. It allows us to turn something that might be a chore into an activity we both can participate in. It saves us tons of money, because Barbara never ends up buying outfits that don’t look good: the most expensive dress you will ever own is the one you don’t wear! Instead, she knows I love her outfits, and that makes her happy, empowered, and adored. Which is a good thing for me too. :)

[Special thanks to my friends Jane Dragone and Marcia Wisniewski, for providing me with the inspiration for this post.]

#3DPrinting is a revolution. Just not the revolution you think it is.

I am wearing a 3D printed object right now: my wedding ring is made out of precious metal and is attractive, elegant, well designed, durable, valuable, and something I hope to still have 50 years from now.

And that’s odd, because most of the 3D printed objects you read about are cheap, ugly, and plastic. Calling them trinkets would be too charitable: the word that seems to fit best is ‘tchotchkes.’ The media focus on 3D printing (also known as additive manufacturing, or AM) has been on the idea of a “new technology that promises a factory in every home.” Like a Star Trek Replicator, these devices will soon be ubiquitous, and we will all be printing out our own light switches and cutlery. Not to mention hot Earl Grey tea.

That’s not going to happen. Why?

#1 Too damn finicky: A society where most people can’t be bothered to sharpen their own knives won’t have the patience to learn how to set up a 3D printer and operate it properly. A Facebook friend of mine bought her own machine recently, and has been documenting her adventure on a blog. Hats off to Michelle, but page after page of not preheating the platform properly, buying a new platform material, coating it with glue, the extruder jamming, the object lifting off the platform before it is done…all in order to make a plastic moustache cookie cutter? The machines will require less consumer calibration one day, but in my view not at a reasonable price point within the next 5-10 years. In the meantime, for every Michelle who is willing to tinker there will be 99 people who would quit in frustration: hardly a factory in EVERY home. (I can’t find the source, but I once heard that the majority of power drills were only used once. And they are much easier to use than today’s consumer 3D printers.)

#2 “Plasticky” is not a good thing: Almost all home 3D printers use one of two plastics (ABS or PLA) that come in spools of filament. Plastic melts easily, the layers adhere well, and it is fairly cheap, which is important when you take dozens of tries to get your moustache cookie cutter just right. But while you may think that the cookie cutter is just a fun thing to learn how to make, and the 3D printer will soon be making much more practical objects, that’s not true. There really aren’t that many things most people need in their lives that are best made out of ABS or PLA, and can’t be bought at your local store faster, cheaper and better.

There are 3D printers that work in metal. But a decent one that can make nicely finished objects of a reasonable size costs hundreds of thousands of dollars, or even millions. That price will come down a bit over time, but will still not be at the ‘factory in every home’ level within the next 10 years. Maybe 20! Just to give you some idea, only 348 3D printers that make metal objects were sold in 2013.

Next, even the metal ones only work in certain kinds of metals. As an example, if you go to this website, you might get excited that you can make 3D printed parts out of gold! Not so fast: here is what actually happens:

“Gold models are 3D printed using a complex five-step process. First, the model is printed in wax using a specialized high-resolution 3D printer. It is then put in a container where liquid plaster is poured in around it. When the plaster sets, the wax is melted out in a furnace, and the remaining plaster becomes the mold.”

That’s cool (and more on this later), but the objects themselves are not 3D printed out of gold – only the wax masters used to make the molds are.

#3 “Anyone can be a designer” is a hideous lie: The current problems around materials and ease of use will get better over time. But most people have the design talent of a dead stoat. Can a society where millions of people still use Comic Sans be trusted to design its own cutlery? Even if I wanted to design my own spoons (and I never have), my experiences trying to make things out of wood, clay, paint, Lego, plasticine, or even papier-mâché have shown me that I am not very good at this kind of thing. Even if I taught my fingers how to fabricate an object competently, my BRAIN doesn’t have the talent to design an adequate object, let alone a beautiful one.

I am not the only one. A recent Globe and Mail article described this phenomenon perfectly:

“Then the piece was printed and my pride was pricked. Rather than epic, the key chain looked jagged and silly. On my laptop screen, I could blow my model up so it looked imposing and impressive and huge. In real life, my dream skyscraper was a sad, little grey lump… A quickly dashed sense of euphoria would be familiar to any industrial designer. Professionals frequently switch between loving and loathing the object they’re creating. The difference between an expert and an amateur, though, is that a pro will keep pushing for perfection. But I don’t have the time or humility or even inclination to keep going – do I even need another key chain?”

Hey. You promised us a revolution – stop being such a downer!

Now it gets good. The future of 3D printing is huge and transformative. But it isn’t about plastic key chains or toys. Some of it does involve small plastic objects: most companies that are using 3D printers today are using them for rapid prototyping. Design a new rear view mirror, print it out in a few hours, and see how it looks on the car or in the wind tunnel. Faster, better and cheaper than how they used to do it. Some are also using one of those 348 metal printers to make advanced aerospace parts. But most consumers (and even most companies) have no need for rapid prototyping or jet engines.

In my view, the biggest potential for 3D printing is in enabling those who DO have design talent to more effectively compete with larger players. And not by using 3D printers alone, but as only one part of the manufacturing process.

Barbara and I recently renewed our wedding vows, and we asked Tara and Courtney Neray of Slashpile Designs to make new rings for us. I didn’t know it at the time, but they used 3D printers:

[Photo: our Slashpile wedding rings]

“These pieces were super interesting to work on because they really used such a perfect combination of new technology and traditional jewellery techniques. We first modeled the rings in CAD, without the texture. In this step, we create a file for each ring that is sized to the customer. Each file is 3-D printed in wax and then cast in the metal of choice (18 karat white gold in this case!)”

The first key aspect is that the Additive Manufacturing technology is only part of the process: 3D printing dovetails perfectly with many existing manufacturing techniques. That may disappoint the Star Trek purists, but it actually means that 3D printing will be huge. New technologies that work with existing processes are almost always adopted more rapidly than those that require entirely new ways of doing things.

But why is it a revolution? Historically, a couple of 20-something designers couldn’t do a lot of custom work fast, or maintain large inventories of many models. I can walk into a large jeweller, see a ring I like, say “give it to me in a size 7.5,” and walk out with a ring in my pocket that day, or perhaps a week later at most. The Slashpile entrepreneurs can’t keep a supply of finished precious-metal rings in dozens of sizes, and they probably can’t even have a full stock of casting molds made ahead of time in the most common sizes.

But with 3D printers, they can make me a custom pair of rings in about a week. Additive manufacturing solves a particular pain point in the manufacturing chain, and dramatically levels the playing field between large manufacturers and the start-up in the garage. Just as PC technology narrowed the gap between the mainframe computer makers and the kids in the Silicon Valley garage.

The story above was about jewelry, but the exact same barriers and solutions exist in multiple industries, and 3D printing – as part of the existing manufacturing process – will be a critical tool.

Welcome to the revolution.

[My Deloitte colleague Eric Openshaw has an article on 3D printing on LinkedIn. It is a short-but-great read, and makes similar points. Here’s my favourite quote:

“In addition, AM makes the supply chain more flexible and agile. Product life cycles are shortening, which puts a premium on speed to market. Since the initial costs can be lower than those of traditional manufacturing, AM can offer competitive per-unit costs at levels below the scale required by traditional manufacturing.”]

[Edited to add. I realise that my comments about my friend Michelle Toy could be misinterpreted. Although I don't think her 3D printer will become 'the factory in her home' either, she is out there learning about additive manufacturing in a very real and hands-on way. Learning any skill is good for you, and good for your brain, and almost always useful.

Back in 1984 my father was teaching a course at BCIT on microprocessors and computers. Early days! He needed someone to build, test and program the machines his class was going to use, and he asked me to do it. I learned how to solder better, how to read resistors, played around with DIP switches and even programmed in hexadecimal. Those machines are less than toys today...but the knowledge I acquired has helped me at least once per month in the 30 years since then.]

Only abnormal people watch a lot of video on their smartphones

[Image: what people are watching on their smartphones and tablets]

Ok, I am trolling you a bit. But what if I told you that 10% of American adults watch 85% of all the video seen on American smartphones (web OR apps)? From a statistical perspective, “normal” American adults don’t use their smartphones for significant amounts of video.

This is all based on the Nielsen Cross Platform Report for the fourth quarter of 2013. It features the following chart, which makes it look like watching videos on smartphones is a big and fast-growing phenomenon. And it is!

[Chart: Nielsen data on smartphone video viewing]

Over a mere 12 month period, Americans 18+ who watched ANY smartphone video rose from about 81 million to almost 102 million individuals, a 26% increase. Further, the average monthly time spent watching video on smartphones rose from just over an hour to nearly one hour and 24 minutes, or just under 40% year-over-year growth. More people watching and each one watching longer: sounds like a big trend. In fact, Americans watched 142 million hours of video per month on their smartphones in Q4 2013, up from 81 million hours the year before, a nearly 75% increase.
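
Those three growth rates hang together arithmetically: total hours are just viewers multiplied by hours per viewer, so the two smaller numbers compound into the big one. A quick sketch using the figures above (61 minutes is my reading of “just over an hour”):

```python
# Recomputing the Nielsen growth figures quoted above.
viewers_2012, viewers_2013 = 81e6, 102e6   # Americans 18+ watching any
minutes_2012, minutes_2013 = 61, 84        # monthly minutes per viewer

print(f"Viewer growth: {viewers_2013 / viewers_2012 - 1:.0%}")            # 26%
print(f"Time-per-viewer growth: {minutes_2013 / minutes_2012 - 1:.0%}")   # ~38%

hours_2012 = viewers_2012 * minutes_2012 / 60   # ~82M hours per month
hours_2013 = viewers_2013 * minutes_2013 / 60   # ~143M hours per month
print(f"Total-hours growth: {hours_2013 / hours_2012 - 1:.0%}")           # ~73%
```

The compounded figure comes out a hair under the published “nearly 75%” because of rounding in the inputs.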

All media is consumed unevenly. Some people watch a lot of traditional TV, others watch less, and others watch none. Same with radio, or the internet, or opera. Normally, the distribution is fairly predictable – there is even a name for it: the 80/20 rule, also known as the Pareto Principle. In business, the traditional phrase is “80% of your sales come from 20% of your clients.”

[Chart: US smartphone video consumption by quartile]

But smartphone video consumption in the US is NOT following that trend. Although just over 100 million Americans 18+ watch video on their smartphones, there are 242 million Americans who are over 18: nearly 60% of American adults NEVER watch video on their phones. Further, if you look at the quartiles of Americans who DO watch at all, you can see that only the top quartile is consuming more than two minutes per day of smartphone video; the rest range from just over a minute a day down to just over 3 SECONDS per day!

Looking at it in aggregate, the 25 million American adults who watched the most smartphone video watched about 120 million hours per month, while the other 90% of American adults watched just over 20 million hours. About 10% watch 85%: watching two minutes or more of video on a smartphone per day is highly abnormal.
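
Here is that headline claim as a few lines of Python, using only numbers already cited in this post:

```python
# The "10% watch 85%" headline, reproduced from the aggregates above.
us_adults = 242e6                 # all Americans 18+
top_quartile = 102e6 / 4          # heaviest-viewing quarter of ~102M watchers
top_hours, total_hours = 120e6, 142e6   # monthly hours of smartphone video

print(f"Top quartile as a share of all adults: {top_quartile / us_adults:.1%}")   # ~10.5%
print(f"Their share of all smartphone video: {top_hours / total_hours:.1%}")      # ~84.5%
```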

[Disclaimer time:

  1. All of the above is based on published Nielsen data. As with any form of media measurement, there may be errors, sampling biases, or other problems.
  2. Next, the Nielsen data for smartphone video is adults only – they do not have data for that metric for younger Americans. It seems likely that those under 18 are watching a fair bit of smartphone video too.
  3. Importantly, the fact that they only have data on 18+ suggests that the smartphone video data is being collected through a poll or a panel, rather than the more accurate metered measurement technology. Panels and polls have their uses, but people tend to be inaccurate when self-reporting media consumption, especially for NEW and sexy technologies. When you ask a thousand people how much radio they listened to in the last month, they under-report. When you ask them how much online video they watched, they tend to overestimate. It seems likely that smartphone video would be the same. Therefore, the Nielsen data (in my view) probably OVERESTIMATES the amount of smartphone video being watched.
  4. Although there are 242 million adult Americans, only 58% owned a smartphone, or just over 140 million people. Therefore, there are three ways of looking at the quartile who watch the most smartphone video. All three are equally true and equally valid (the sketch after this list works through the arithmetic).
    1. 85% of all smartphone video is watched by 10% of all adult Americans.
    2. 85% of all smartphone video is watched by 18% of all adult Americans who own a smartphone, including those who watch no video at all. You will note that is fairly close to the Pareto distribution one might expect. From the perspective of smartphone makers, smartphone video is shaking out about as expected. But from the perspective of advertisers who are comparing smartphone video reach to something like TV, the total population is likely more relevant.
    3. 85% of all smartphone video is watched by 25% of all adult Americans who own a smartphone and watch any video on the device at all.
  5. This all needs to be placed in the context of overall video watching. According to Nielsen, the average American over the age of two spends 191 hours per month watching live TV, time-shifted TV, a DVD/BluRay device, a game console (some portion of which is stuff like Netflix), or video on the internet but not on a smartphone. In other words, even for the roughly 25 million American adults who watch the most video on a smartphone, their just under 5 hours per month represents less than 3% of their total monthly video consumption.
  6. When I first wrote this post, I put up the pie chart above. It occurred to me that there are other ways of depicting the numbers. Below I have attached a pie chart showing all adult Americans, with the various populations (in millions of people) and the viewing bucket each fits in. Same data, just through a different filter.]
    [Chart: all adult Americans, in millions, by smartphone-video viewing bucket]
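
As promised in point 4, here is the same top quartile divided by each of the three populations; only the denominator changes:

```python
# The three equally valid framings from point 4, spelled out.
top_quartile = 25e6   # the heaviest smartphone-video viewers

for label, population in [
    ("all US adults", 242e6),
    ("US adult smartphone owners", 140e6),
    ("US adults who watch any smartphone video", 102e6),
]:
    print(f"85% of viewing comes from {top_quartile / population:.0%} of {label}")
# -> 10% of all adults, 18% of owners, 25% of watchers
```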

Looks like I am going to be wrong about #phablets

[Image: a phablet and a pair of pants]

I have travelled the world in the last six months, telling everyone that phablets (smartphones with screens 5.0 inches or larger) would be a big thing: representing 25% of all smartphone sales, these devices would be a $125 billion business, even bigger than TV sets or tablets!

And in almost every market, my audiences have openly mocked me. No one would want such a bulky and clumsy device, which makes you look ridiculous when calling, and doesn’t fit in your jeans. Out of nearly ten thousand people, fewer than a hundred have admitted they even MIGHT buy one. “Face it, Duncan,” they said, “phablets will be a phony phaddish phailure.”

And the wisdom of the crowd triumphs again: there is no way phablets will be 25% of smartphone sales for 2014. I was wrong.

Based on Q1 shipments alone, Canalys says that phablets are already 34% of units sold.

[Chart: Canalys phablet shipment data]

Given the growth trajectory, and rumours of a 5.5 inch Apple phablet shipping in time for Q4, it appears that somewhere around 40% of all smartphones sold this year will be phablets, with about 40% of those phablets having screens 5.5 inches or larger.

[Image: mea culpa]

Why TV Everywhere…isn’t.

According to everybody, TV viewers in 2014 demand the ability to “watch what they want, when they want, on whatever device they want.” Traditional broadcast TV can’t do that, so conventional broadcasters and distributors in North America have spent millions of dollars building a service usually known as TV Everywhere. The TV industry folks I talk to are convinced that TV Everywhere will be the salvation of traditional TV, and will defend it from cord-cutting (although cord-cutting isn’t happening at scale…yet). And they are really upset: it isn’t going as well as they hoped.

There’s no need for the “TV Nowhere” headlines to start flying. A recent survey from NPD Group found that 21% of Americans who subscribe to pay TV use their provider’s TV Everywhere service at least once a month, and 90% of those are happy with the service. Hooray! Problem solved, right?

Not so fast. The average American watches TV daily, not monthly. Next, my skeptic alarm goes off when I don’t see data released on the number of hours of TV Everywhere being watched. According to the latest Nielsen numbers, the average American watched 155 hours of traditional TV per month, almost 15 hours of time-shifted TV on PVRs, and about 9 hours of TV on the internet or on a smartphone. Where does TV Everywhere stand? No one is saying, and that’s usually not a good sign.

To be clear, I am sure TV Everywhere is being used, it will grow year over year, and it will represent an increasing number of viewing hours over time. But I think there are two big problems.

You can’t watch ‘what you want’ if you don’t know what you want!

I will use my own viewing habits as an example. About half my TV watching is on the big-screen TV, with a signal from my cable company. I like NFL football, but I never watch it on a tablet, PC or smartphone. Instead I watch it live and on the big set – as most people do with sports. I like playing along with Jeopardy! while I am making dinner: Monday through Saturday, 7:30 pm on Channel 11, with the TV pointed towards the kitchen. I could put it on the PC behind me, but that screen is too small to read the clues, while the TV set is perfect.

The other half of my viewing is mainly Internet video grazing (a funny video of pets interfering with their owners doing yoga; that really strong girl on American Ninja; or John Oliver doing a takedown of Dr. Oz), or occasionally a show that a recommendation algorithm has suggested and that I watch on streaming video-on-demand (à la Netflix).

I discover the first category primarily through social media like Facebook and Twitter. People I like and trust, and who have similar tastes, recommend videos; I click on them, and share. I had never seen the John Oliver show before, but not only did I like his Dr. Oz segment, I also loved the FIFA rant. So why don’t I use my version of TV Everywhere to watch Last Week Tonight With John Oliver any time I want?

[Image: John Oliver on FIFA and corruption]

Because life is too short, and not every episode is funny. So I let the Internet curate the content, and have my social peeps be my recommendation engine and filter.

TV Everywhere doesn’t do that.

Once you’re off the freeway, you don’t want to eat at Denny’s

frenchtoastmenu-e1362675115452

I have driven from Vancouver to Toronto six times now: about 4,400 km each way, and I drove through the US for better highways and cheaper gas. I-90, with a swing down to I-80 to avoid the traffic around Chicago. I was young and poor, and trying to make good time: breakfast and lunch were at McDonalds right beside the freeway, the motels (or campgrounds) at night were beside the freeway, and my dinners were at “restaurants” beside the freeway. This was back in the 1980s, and the choice of restaurants was VERY limited. One time, I was driving through Iowa, well into Day 3 of driving. I couldn’t take one more dinner at Denny’s, so I actually LEFT THE HIGHWAY and drove into town down Route 61, aka Welcome Way.

All of my usual freeway dining choices were there too, but once I had made the decision to leave the highway, I became effectively ‘blind’ to them as possible dining options.

In the same way, once viewers decide to watch something on a PC, tablet or smartphone, they are looking for something different than regular TV. For many of them, the idea of watching the same stuff as on traditional TV, but now on demand (i.e. TV Everywhere) is as useful as still going to Denny’s…but getting the Thousand Island dressing instead of the Ranch. It is different, but it’s not different enough.

Dine and Dash: the killer app for mobile payments?

Mobile payments have been slow to take off in most of the developed world. In places like East Africa, they are booming, because most people are un-banked or under-banked: they use mobile because they have no alternative. But where 99% of the population already has debit and credit cards, mobile payments are lagging expectations. The rule seems to be that mobile will only take off when it solves a specific payments problem that traditional payment cards don’t.

Up until now, transit has seemed to be the leading killer app for mobile payments. But, as this article points out, restaurant bills can also be a real PITA — both for diners and for wait staff.

“Instead of going through the rigamarole [sic] of busting out their credit cards or splitting the check at the end, diners merely check in with the app when they arrive and alert their server that they’re paying with Cover. When a meal is over, payment will be made through the app, and will automatically be split based on the number of diners that were in the party. No more calculating the tip, figuring out each person’s contribution, or waiting for change or credit cards to be swiped and returned. You just get up and go.”
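
The computation being automated here is trivial, which is rather the point: the win is in the workflow, not the math. A minimal sketch of the even split (my illustration, not Cover’s actual logic):

```python
# What the app automates: splitting the bill evenly, tip included.
def split_check(subtotal: float, tip_pct: float, diners: int) -> float:
    """Each diner's even share of the bill, with tip, rounded to the cent."""
    total = subtotal * (1 + tip_pct / 100)
    return round(total / diners, 2)

# Four diners, a $120.00 bill, an 18% tip: $35.40 each. No mental math,
# and no waiting for four credit cards to be swiped and returned.
print(split_check(120.00, 18, 4))
```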

Not only is this a big win for diners, it is also fairly cheap and easy for restaurants to set up. That’s important: restos tend to have skinny profit margins, no time for training, and high employee turnover.

“But, just as importantly, there’s very little setup involved in adding Cover to their workflow. While most restaurants are used to having significant upfront costs associated with hardware and training staff whenever new technology is put in place, adding Cover payments is basically free and can be set up in minutes.”

To be clear, I am not endorsing Cover specifically. But the category of mobile payment apps for dining is one that may do better than some other payment verticals.

The Singularity may not be as close as you think

[Image: objects in mirror are closer than they appear]

Ray Kurzweil didn’t invent the concept of the technological singularity, but his 2005 book The Singularity Is Near is the best-known use of the term, and the obvious inspiration for the title of this lengthy blog post. The book makes many arguments and predictions, but the most famous was that by the year 2045, artificial machine intelligence (strong AI) would exceed the combined intelligence of all the world’s human brains.

The idea of more-than-human strong machine intelligence didn’t start with Kurzweil. As merely one example, Robert Heinlein’s The Moon is a Harsh Mistress (1966) has a sentient computer nicknamed Mike, and even describes how it achieves consciousness: “Human brain has around ten-to-the-tenth neurons…Mike had better than one and a half times that number of neuristors. And woke up.”

The analogy made a lot of sense. The neurons we believed were solely responsible for brain function seemed to work an awful lot like the on/off switches that transistors are in computer brains. Maybe human brains were a bit more complex, but at some point the machines would catch up to us, and then pass us.

Kurzweil’s argument is considerably more complex than Heinlein’s, as would be expected 40 years later. He argues that the human brain is capable of around “10^16 calculations per second and 10^13 bits of memory,” and that better understanding of the brain (mainly through better imaging) will allow us to combine Moore’s Law and other new technologies to create strong machine intelligence. Concepts like ‘calculations per second’ (more on this later) have led directly to charts like this one from Kurzweil’s book:

[Chart: Kurzweil’s ‘Exponential Growth of Computing’]

Needless to say, this kind of prediction is perfect fodder for sensational media stories. We’ve all grown up on Frankenstein, HAL 9000 and Skynet, and the headline “By 2045 ‘The Top Species Will No Longer Be Humans,’ And That Could Be A Problem” was just begging to be written.

But there’s a problem: although there are those who talk about the Singularity and strong artificial intelligence occurring 30 years from now, there is another bunch of very smart people who say it is unlikely to be anywhere near that soon. And the reason they think so isn’t so much that current machines aren’t smart enough; it is that we don’t know enough about the human brain.

Jaron Lanier (who — like Kurzweil — is NOT a true AI researcher, merely someone who writes well about the topic) said this week “We don’t yet understand how brains work, so we can’t build one.”

That’s a really important point. The Wright brothers spent hours observing soaring birds at the Pinnacles in Ohio, saw that they twisted their wing tips to steer, and incorporated that into their wing warping theory of 1899. They were able to create artificial flight because they had a model of natural flight.

Decades ago, brain scientists thought they had an equally clear model of how human brains worked: neurons were composed of dendrites and axons, and the gaps between neurons were synapses, and electrical signals propagated along the neuron just like messages along a wire. They still didn’t have a clue where consciousness came from, but they thought they had a good model of the brain.

Since then, scientists keep discovering that the reality is far more complex: there are all kinds of activation pathways, neurotransmitters, long-term potentiation, glial cells, plasticity, and (although consensus is against this) perhaps even quantum effects. I’m not a brain researcher, but I do follow the literature. And we don’t appear to know enough to allow AI researchers to mimic or simulate all these various details and processes in machine intelligences.

[This bit is only for those who are really interested in brain function. Kurzweil’s assumption was that the human brain is capable of around 10^16 calculations per second, based on estimates that the adult human brain has around 10^11 (100 billion) neurons and 10^14 (100 trillion) synapses. As of 2005, that seemed like a reasonable way of looking at the subject. However, since then scientists have learned that glial cells may be much more important than we thought only a decade ago. ‘Glia’ is Greek for glue, and historically these cells were thought to kind of hold the brain together, but not play a direct role in cognition. This now appears to be untrue: glial cells can make their own synapses, they make up a MUCH greater percentage of brain tissue in more intelligent animals (a linear relationship, in fact), and there are about 10x as many of them in the human brain as neuronal cells. Kurzweil’s assumptions about the number of calculations per second MAY be accurate. Or they may be anywhere from hundreds to hundreds of thousands of times too low. Perhaps most importantly, the very idea of trying to compare the way computers ‘think’ (FLoating-point Operations Per Second, or FLOPS, which are digital and can be summed) with how the human brain works (which is an analog, stochastic process) may not be a good way of thinking about thinking at all.]
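
For the really keen, the arithmetic behind numbers like 10^16 is simple enough to write down, and so is the glial adjustment. A hedged sketch; the ops-per-synapse figure is the assumption these estimates usually rest on, not a measured constant:

```python
# Back-of-envelope behind Kurzweil-style brain-capacity estimates.
synapses = 1e14         # ~100 trillion synapses (~10^3 each for 10^11 neurons)
ops_per_synapse = 100   # assumed "calculations" per synapse per second

print(f"Classic estimate: {synapses * ops_per_synapse:.0e} ops/sec")   # 1e+16

# If glial cells (about 10x as numerous as neurons) also compute, the
# figure could be at least an order of magnitude higher -- and if the
# brain's analog, stochastic processing doesn't map onto FLOPS at all,
# the comparison itself may be the wrong frame.
glial_factor = 10
print(f"Glia-adjusted: {synapses * ops_per_synapse * glial_factor:.0e} ops/sec")
```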

If you survey actual strong AI researchers, rather than popularisers, you still get a median value of around 2040. But the tricky bit is the range of opinions: Kurzweil and his group are clustered around 2030-2045…but there is another large group that thinks it may be 100 years off. To quote the fellow who did the meta-analysis of all the informed views: “…my current 80% estimate is something like five to 100 years.” That’s a range you could drive a truck through.

The more pessimistic group points out that although we now have computers that can beat world champions at chess or Jeopardy!, and even fool a percentage of people into thinking they are talking to a real person, these computers are almost certainly not doing that in any way that is similar to how the human brain works. The technologies that enable things like Watson and Deep Blue are weak AI, and are potentially useful, but they should not necessarily be considered stepping stones on the path to strong AI.

[Image: IBM Watson playing Jeopardy!]

Based on my experience following this field since the mid-1970s, I am now leaning (sadly) to the view that the pessimists will be correct. Don’t get me wrong: at any point there could be a breakthrough in our understanding of the brain, or in new technologies that are better able to mimic the human brain, or both. And the Singularity could occur in the next 12 months. But that’s not PROBABLE, and from a probability perspective I would be surprised to see the Singularity before my 100th birthday, 50 years from now in 2064. And I would not be surprised if it still hadn’t happened in 2114.

So who cares about the Singularity? If it is likely to not happen until next century, then any effort spent thinking about it now is a waste of time, right?

In the early 1960s, hot on the heels of the Cuban Missile Crisis and Mutually Assured Destruction (MAD) nuclear war scenarios, American musical satirist Tom Lehrer wrote a song of what he referred to as ‘pre-nostalgia’. Called “So Long, Mom (A Song for World War III)”, it came with this rationale:

“It occurred to me that if any songs are going to come out of World War III…we better start writing them now.”

In the same way, the time to start thinking about strong AI, the Singularity, and related topics is BEFORE they occur, not after.

This will matter for public policy makers, those in the TMT industry, and anyone whose business might be affected by an equal-to-or-greater-than-human machine intelligence. Which is more or less everyone!

Next, even if the Singularity (with a capital S) doesn’t happen for 100 years, the exercise of thinking about what kinds of effects stronger artificial intelligence will have on business models and society is a wonderful thought experiment, and one that leads to useful strategy discussions, even over the relatively short term.

I would like to once again thank my friend Brian Piccioni, who has discussed the topic of strong and weak AI with me over many lunches and coffees in the past ten years, and who briefly reviewed this article. All errors and omissions are mine, of course.
