Only abnormal people watch a lot of video on their smartphones


OK, I am trolling you a bit. But what if I told you that 10% of American adults watch 85% of all the video seen on American smartphones (web OR apps)? From a statistical perspective, “normal” American adults don’t use their smartphones for significant amounts of video.

This is all based on the Nielsen Cross Platform Report for the fourth quarter of 2013. It features the following chart, which makes it look like watching videos on smartphones is a big and fast-growing phenomenon. And it is!

[Chart: Nielsen smartphone viewing]

Over a mere 12 month period, Americans 18+ who watched ANY smartphone video rose from about 81 million to almost 102 million individuals, a 26% increase. Further, the average monthly time spent watching video on smartphones rose from just over an hour to nearly one hour and 24 minutes, or just under 40% year-over-year growth. More people watching and each one watching longer: sounds like a big trend. In fact, Americans watched 142 million hours of video per month on their smartphones in Q4 2013, up from 81 million hours the year before, a nearly 75% increase.
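
To make the arithmetic explicit, here is a minimal sketch (in Python) that reproduces those growth figures; the viewer counts and viewing times are rounded approximations of the Nielsen numbers quoted above, so the outputs are approximate too.

```python
# Rough check of the year-over-year smartphone video growth figures quoted above.
# Inputs are rounded approximations of the Nielsen data, so outputs are approximate.

viewers_2012 = 81e6      # adults 18+ watching any smartphone video, Q4 2012
viewers_2013 = 102e6     # same metric, Q4 2013
minutes_2012 = 60        # average monthly viewing time, ~1 hour
minutes_2013 = 84        # average monthly viewing time, ~1 hour 24 minutes

viewer_growth = viewers_2013 / viewers_2012 - 1   # roughly +26%
time_growth = minutes_2013 / minutes_2012 - 1     # roughly +40%

hours_2012 = viewers_2012 * minutes_2012 / 60     # ~81 million hours per month
hours_2013 = viewers_2013 * minutes_2013 / 60     # ~142 million hours per month
total_growth = hours_2013 / hours_2012 - 1        # roughly +75%

print(f"Viewers: {viewer_growth:+.0%}, time per viewer: {time_growth:+.0%}, "
      f"total hours: {total_growth:+.0%}")
```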

All media is consumed unevenly. Some people watch a lot of traditional TV, others watch less, and others watch none. Same with radio, or the internet, or opera. Normally, the distribution is fairly predictable – there is even a name for it: the 80/20 rule, also known as the Pareto Principle. In business, the traditional phrase is “80% of your sales come from 20% of your clients.”

[Pie chart: US smartphone video]

But smartphone video consumption in the US is NOT following that pattern. Although just over 100 million Americans 18+ watch video on their smartphones, there are 242 million Americans over 18: nearly 60% of American adults NEVER watch video on their phones. Further, if you look at the quartiles of Americans who DO watch at all, only the top quartile is consuming more than two minutes per day of smartphone video; the rest range from just over a minute a day down to just over 3 SECONDS per day!

Looking at it in aggregate, the 25 million American adults who watched the most smartphone video watched about 120 million hours per month, while the other 90% of American adults watched just over 20 million hours. About 10% watch 85%: watching two minutes or more of video on a smartphone per day is highly abnormal.
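
The concentration claim is just as easy to check, using the same figures:

```python
# How concentrated is US smartphone video viewing? Figures as quoted above.

us_adults = 242e6      # Americans 18 and over
top_viewers = 25e6     # the heaviest-viewing quartile of those who watch at all
top_hours = 120e6      # their monthly hours of smartphone video
total_hours = 142e6    # total monthly hours across all adults

print(f"Top viewers as a share of all adults: {top_viewers / us_adults:.0%}")   # ~10%
print(f"Their share of all viewing hours:     {top_hours / total_hours:.0%}")   # ~85%
print(f"Hours left for everyone else:         {(total_hours - top_hours) / 1e6:.0f} million")
```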

[Disclaimer time:

  1. All of the above is based on published Nielsen data. As with any form of media measurement, there may be errors, sampling biases, or other problems.
  2. Next, the Nielsen data for smartphone video is adults only – they do not have data for that metric for younger Americans. It seems likely that those under 18 are watching a fair bit of smartphone video too.
  3. Importantly, the fact that they only have data on 18+ suggests that the smartphone video data is gathered through a poll or a panel, rather than the more accurate metered measurement technology. Panels and polls have their uses, but people tend to be inaccurate when self-reporting their media consumption. Especially about NEW and sexy technologies. When you ask a thousand people how much radio they listened to in the last month, they under-report. And when you ask them how much online video they watched, they tend to overestimate. It seems likely that smartphone video would be the same. Therefore, the Nielsen data (in my view) probably OVERESTIMATES the amount of smartphone video being watched.
  4. Although there are 242 million adult Americans, only 58% owned a smartphone, or just over 140 million people. Therefore, there are three ways of looking at the quartile who watch the most smartphone video. All three are equally true and equally valid.
    1. 85% of all smartphone video is watched by 10% of all adult Americans.
    2. 85% of all smartphone video is watched by 18% of all adult Americans who own a smartphone, including those who watch no video at all. You will note that is fairly close to the Pareto distribution one might expect. From the perspective of smartphone makers, smartphone video is shaking out about as expected. But from the perspective of advertisers who are comparing smartphone video reach to something like TV, the total population is likely more relevant.
    3. 85% of all smartphone video is watched by 25% of all adult Americans who own a smartphone and watch any video on the device at all.
  5. This all needs to be placed in the context of overall video watching. According to Nielsen, the average American over the age of two spends 191 hours per month watching live TV, time-shifted TV, a DVD/BluRay device, a game console (some portion of which is stuff like Netflix), or video on the internet but not on a smartphone. In other words, even for the 25 million American adults who watch the most video on a smartphone (just under five hours per month), that represents less than 3% of their total monthly video consumption.
  6. When I first wrote this post, I put up the pie chart above. It occurred to me that there are other ways of depicting the numbers. Below I have attached a pie chart showing all adult Americans, and it shows the various populations (in millions of people) and which viewing bucket they fit in. Same data, just through a different filter.]
    [Pie chart: Nielsen smartphone viewing, all adult Americans]

Looks like I am going to be wrong about #phablets


I have travelled the world in the last six months, telling everyone that phablets (smartphones with screens 5.0 inches or larger) would be a big thing: representing 25% of all smartphone sales, these devices would be a $125 billion business, even bigger than TV sets or tablets!
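
For context, here is roughly how a “25% of smartphone sales” share turns into a $125 billion figure. The total-shipment and average-selling-price numbers below are my own illustrative assumptions, not figures from the original prediction.

```python
# Illustrative back-of-envelope for the phablet prediction.
# ASSUMPTIONS (mine, not from the post): ~1.2 billion smartphones shipped in 2014,
# and an average selling price for phablets in the low $400s.

smartphones_2014 = 1.2e9   # assumed global smartphone shipments in 2014
phablet_share = 0.25       # the predicted share
phablet_asp = 417          # assumed average selling price, in USD

phablet_units = smartphones_2014 * phablet_share   # ~300 million units
phablet_revenue = phablet_units * phablet_asp      # ~$125 billion

print(f"{phablet_units / 1e6:.0f}M phablets x ${phablet_asp} ~= ${phablet_revenue / 1e9:.0f}B")
```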

And in almost every market, my audiences have openly mocked me. No one would want such a bulky and clumsy device, which makes you look ridiculous when calling, and doesn’t fit in your jeans. Out of nearly ten thousand people, fewer than a hundred have admitted they even MIGHT buy one. “Face it, Duncan,” they said, “phablets will be a phony phaddish phailure.”

And the wisdom of the crowd triumphs again: there is no way phablets will be 25% of smartphone sales for 2014. I was wrong.

Based on Q1 shipments alone, Canalys says that phablets are already 34% of units sold.

[Chart: Canalys phablet shipments]

Given the growth trajectory, and rumours of a 5.5 inch Apple phablet shipping in time for Q4, it appears that somewhere around 40% of all smartphones sold this year will be phablets, with about 40% of those phablets having screens 5.5 inches or larger.


Why TV Everywhere…isn’t.


According to everybody, TV viewers in 2014 demand the ability to “watch what they want, when they want, on whatever device they want.” Traditional broadcast TV can’t do that, so conventional broadcasters and distributors in North America have spent millions of dollars building a service usually known as TV Everywhere. The TV industry folks I talk to are convinced that TV Everywhere will be the salvation of traditional TV and will defend it against cord-cutting (although cord-cutting isn’t happening at scale…yet). And they are really upset that it isn’t going as well as they hoped.

There’s no need for the “TV Nowhere” headlines to start flying. A recent survey from NPD Group found that 21% of Americans who subscribe to pay TV use their provider’s TV Everywhere service at least once a month, and 90% of those are happy with the service. Hooray! Problem solved, right?

Not so fast. The average American watches TV daily, not monthly. And my skeptic alarm goes off when I don’t see any data released on the number of hours of TV Everywhere being watched. According to the latest Nielsen numbers, the average American watched 155 hours of traditional TV per month, almost 15 hours of time-shifted TV on PVRs, and about 9 hours of TV on the internet or on a smartphone. Where does TV Everywhere stand? No one is saying, and that’s usually not a good sign.

To be clear, I am sure TV Everywhere is being used, it will grow year over year, and it will represent an increasing number of viewing hours over time. But I think there are two big problems.

You can’t watch ‘what you want’ if you don’t know what you want!

I will use my own viewing habits as an example. About half my TV watching is on the big-screen TV, with a signal from my cable company. I like NFL football, but I never watch it on a tablet, PC or smartphone. Instead I watch it live and on the big set – as most people do with sports. I like playing along with Jeopardy! while I am making dinner: Monday through Saturday, 7:30 pm on Channel 11, with the TV pointed towards the kitchen. I could put it on the PC behind me, but that screen is too small to read the clues, while the TV set is perfect.

The other half of my viewing is mainly Internet video grazing (a funny video of pets interfering with their owners doing yoga; that really strong girl on American Ninja; or John Oliver doing a takedown on Dr. Oz) or occasionally a show that a search algorithm has recommended to me and that I watch on streaming Video-On-Demand (a la Netflix.)

I discover the first category primarily through social media like Facebook and Twitter. People I like and trust, and who have similar tastes, recommend videos; I click on them and share them in turn. I had never seen the John Oliver show before, but not only did I like his Dr. Oz segment, I also loved the FIFA rant. So why don’t I use my version of TV Everywhere to watch Last Week Tonight With John Oliver any time I want?


Because life is too short, and not every episode is funny. So I let the Internet curate the content, and have my social peeps be my recommendation engine and filter.

TV Everywhere doesn’t do that.

Once you’re off the freeway, you don’t want to eat at Denny’s


I have driven from Vancouver to Toronto six times now: about 4,400 km each way, and I drove through the US for better highways and cheaper gas. I-90, with a swing down to I-80 to avoid the traffic around Chicago. I was young and poor, and trying to make good time: breakfast and lunch were at McDonalds right beside the freeway, the motels (or campgrounds) at night were beside the freeway, and my dinners were at “restaurants” beside the freeway. This was back in the 1980s, and the choice of restaurants was VERY limited. One time, I was driving through Iowa, well into Day 3 of driving. I couldn’t take one more dinner at Denny’s, so I actually LEFT THE HIGHWAY and drove into town down Route 61, aka Welcome Way.

All of my usual freeway dining choices were there too, but once I had made the decision to leave the highway, I became effectively ‘blind’ to them as possible dining options.

In the same way, once viewers decide to watch something on a PC, tablet or smartphone, they are looking for something different than regular TV. For many of them, the idea of watching the same stuff as on traditional TV, but now on demand (i.e. TV Everywhere) is as useful as still going to Denny’s…but getting the Thousand Island dressing instead of the Ranch. It is different, but it’s not different enough.

 

 

 

Dine and Dash: the killer app for mobile payments?


Mobile payments have been slow to take off in most of the developed world. In places like East Africa, they are booming, because most people are un-banked or under-banked: they use mobile because they have no alternative. But where 99% of the population already has debit and credit cards, mobile payments are lagging expectations. The rule seems to be that mobile will only take off when it solves a specific payments problem that traditional payment cards don’t.

Up until now, transit has seemed to be the leading killer app for mobile payments. But, as this article points out, restaurant bills can also be a real PITA — both for diners and for wait staff.

“Instead of going through the rigamarole [sic] of busting out their credit cards or splitting the check at the end, diners merely check in with the app when they arrive and alert their server that they’re paying with Cover. When a meal is over, payment will be made through the app, and will automatically be split based on the number of diners that were in the party. No more calculating the tip, figuring out each person’s contribution, or waiting for change or credit cards to be swiped and returned. You just get up and go.”
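
The mechanics being described are simple enough to sketch. The function below is a hypothetical illustration of an even split with an automatic tip, not Cover’s actual implementation.

```python
# Hypothetical sketch of the "no math at the table" idea described above:
# split the bill evenly among the diners and add a fixed tip percentage.
# This is NOT Cover's actual logic, just an illustration of the concept.

def split_bill(total: float, diners: int, tip_rate: float = 0.18) -> float:
    """Return each diner's share of the bill, tip included."""
    if diners < 1:
        raise ValueError("need at least one diner")
    return round(total * (1 + tip_rate) / diners, 2)

# Example: a $120 dinner for a party of four with an 18% tip -> $35.40 each
print(split_bill(120.00, 4))
```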

Not only is this a big win for diners, it is also fairly cheap and easy for restaurants to set up. That’s important: restos tend to have skinny profit margins, no time for training, and high employee turnover.

“But, just as importantly, there’s very little setup involved in adding Cover to their workflow. While most restaurants are used to having significant upfront costs associated with hardware and training staff whenever new technology is put in place, adding Cover payments is basically free and can be set up in minutes.”


To be clear, I am not endorsing Cover specifically. But the category of mobile payment apps for dining is one that may do better than some other payment verticals.

The Singularity may not be as close as you think

[Image: objects in mirror]

Ray Kurzweil didn’t invent the concept of the technological singularity, but his 2005 book The Singularity Is Near is the best-known use of the term, and the obvious inspiration for the title of this lengthy blog post. The book makes many arguments and predictions, but the most famous is that by 2045 artificial machine intelligence (strong AI) would exceed the combined intelligence of all the world’s human brains.

The idea of more-than-human strong machine intelligence didn’t start with Kurzweil. As merely one example, Robert Heinlein’s The Moon is a Harsh Mistress (1966) has a sentient computer nicknamed Mike, and even describes how it achieves consciousness: “Human brain has around ten-to-the-tenth neurons…Mike had better than one and a half times that number of neuristors. And woke up.”

The analogy made a lot of sense. The things that we believed were solely responsible for brain function in human brains seemed to work an awful lot like the on/off switching roles that transistors played in computer brains. Maybe human brains were a bit more complex, but at some point the machines would catch us up, and then pass us.

Kurzweil’s argument is considerably more complex than Heinlein’s, as would be expected 40 years later. He argues that the human brain is capable of around “10^16 calculations per second and 10^13 bits of memory,” and that a better understanding of the brain (mainly through better imaging) will allow us to combine Moore’s Law and other new technologies to create strong machine intelligence. Concepts like ‘calculations per second’ (more on this later) have led directly to charts like this one from Kurzweil’s book:

[Chart: Exponential Growth of Computing]
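
To show the kind of extrapolation behind a chart like that, here is a deliberately crude sketch: start from a leading 2005 supercomputer at roughly 10^14 calculations per second, assume capability doubles on a fixed schedule, and see when it crosses Kurzweil’s 10^16 figure for one brain and the roughly 10^26 implied by all human brains combined. The starting point, the brain count, and the doubling times are my own assumptions, chosen only to illustrate the shape of the argument, not Kurzweil’s actual model.

```python
import math

# Crude illustration of the exponential-extrapolation argument (NOT Kurzweil's model).
# Assumptions: a leading 2005 supercomputer offers ~1e14 calculations per second;
# one human brain ~1e16 cps; all brains combined ~1e26 cps (a generous round number).

START_YEAR, START_CPS = 2005, 1e14
ONE_BRAIN, ALL_BRAINS = 1e16, 1e26

def year_reached(target_cps: float, doubling_years: float) -> float:
    """Year the target is reached if capability doubles every `doubling_years` years."""
    return START_YEAR + math.log2(target_cps / START_CPS) * doubling_years

for doubling in (1.0, 2.0):
    print(f"doubling every {doubling:g} years: "
          f"one brain ~{year_reached(ONE_BRAIN, doubling):.0f}, "
          f"all brains ~{year_reached(ALL_BRAINS, doubling):.0f}")
```

Note how the ‘all brains’ crossing swings from roughly 2045 to roughly 2085 just by changing the assumed doubling time from one year to two. That sensitivity to assumptions is part of why forecasts in this area vary so widely, as discussed below.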

Needless to say, this kind of prediction is perfect fodder for sensational media stories. We’ve all grown up on Frankenstein, HAL 9000 and Skynet, and the headline “By 2045 ‘The Top Species Will No Longer Be Humans,’ And That Could Be A Problem” was just begging to be written.

But there’s a problem: although there are those who talk about the Singularity and strong artificial intelligence occurring 30 years from now, there is another group of very smart people who say it is unlikely to happen anywhere near that soon. And the reason they think so isn’t so much that current machines aren’t smart enough; it is that we don’t know enough about the human brain.

Jaron Lanier (who — like Kurzweil — is NOT a true AI researcher, merely someone who writes well about the topic) said this week “We don’t yet understand how brains work, so we can’t build one.”

That’s a really important point. The Wright brothers spent hours observing soaring birds at the Pinnacles in Ohio, saw that they twisted their wing tips to steer, and incorporated that into their wing warping theory of 1899. They were able to create artificial flight because they had a model of natural flight.

Decades ago, brain scientists thought they had an equally clear model of how human brains worked: neurons were composed of dendrites and axons, and the gaps between neurons were synapses, and electrical signals propagated along the neuron just like messages along a wire. They still didn’t have a clue where consciousness came from, but they thought they had a good model of the brain.

Since then, scientists keep discovering that the reality is far more complex, and there are all kinds of activation pathways, neurotransmitters, long term potentiation, Glial cells, plasticity, and (although consensus is against this) perhaps even quantum effects. I’m not a brain researcher, but I do follow the literature. And we don’t appear to know enough to allow AI researchers to mimic or simulate all these various details and processes in machine intelligences.

[This bit is only for those who are really interested in brain function. Kurzweil’s assumption was that the human brain is capable of around 10^16 calculations per second, based on estimates that the adult human brain has around 10^11 (100 billion) neurons and 10^14 (100 trillion) synapses. As of 2005, that seemed like a reasonable way of looking at the subject. However, since then scientists have learned that glial cells may be much more important than we thought only a decade ago. ‘Glia’ is Greek for glue, and historically these cells were thought to kind of hold the brain together, but not play a direct role in cognition. This now appears to be untrue: glial cells can make their own synapses, they make up a MUCH greater percentage of brain tissue in more intelligent animals (a linear relationship, in fact), and there are about 10x as many of them in the human brain as neuronal cells. Kurzweil’s assumptions about the number of calculations per second MAY be accurate. Or they may be anywhere from hundreds to hundreds of thousands of times too low. Perhaps most importantly, the very idea of trying to compare the way computers ‘think’ (FLoating-point Operations Per Second, or FLOPS, which are digital and can be summed) with how the human brain works (which is an analog, stochastic process) may not be a good way of thinking about thinking at all.]
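
To make that glial uncertainty concrete, here is a small sketch; the per-synapse rate and the multipliers are illustrative assumptions, not measured values.

```python
# How sensitive is the "brain ~ 1e16 calculations per second" estimate?
# The firing rate and the multipliers below are illustrative assumptions only.

synapses = 1e14           # ~100 trillion synapses (the figure used above)
ops_per_synapse = 100     # assumed operations per synapse per second

baseline_cps = synapses * ops_per_synapse   # ~1e16, Kurzweil's ballpark

# If glial cells also contribute to computation, the note above suggests the
# baseline could be low by a factor of hundreds to hundreds of thousands.
for factor in (1e2, 1e4, 1e5):
    print(f"{factor:g}x too low -> ~{baseline_cps * factor:.0e} calculations per second")
```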

If you do a survey of strong AI researchers, rather than popularisers, you still get a median value of around 2040. But the tricky bit is the range of opinions: Kurzweil and his group are clustered around 2030-2045…but there is another large group that thinks it may be 100 years off. To quote the guy who did the meta-analysis of all the informed views: “…my current 80% estimate is something like five to 100 years.” That’s a range you could drive a truck through.

The more pessimistic group points out that although we now have computers that can beat world champions at chess or Jeopardy!, and even fool a percentage of people into thinking they are talking to a real person, these computers are almost certainly not doing that in any way that is similar to how the human brain works. The technologies that enable things like Watson and Deep Blue are weak AI, and are potentially useful, but they should not necessarily be considered stepping stones on the path to strong AI.


Based on my experience following this field since the mid-1970s, I am now leaning (sadly) to the view that the pessimists will be correct. Don’t get me wrong: at any point there could be a breakthrough in our understanding of the brain, or in new technologies that are better able to mimic the human brain, or both. And the Singularity could occur in the next 12 months. But that’s not PROBABLE, and from a probability perspective I would be surprised to see the Singularity before my 100th birthday, 50 years from now in 2064. And I would not be surprised if it still hadn’t happened in 2114.

So who cares about the Singularity? If it is likely to not happen until next century, then any effort spent thinking about it now is a waste of time, right?

In the early 1960s, hot on the heels of the Cuban Missile Crisis and Mutually Assured Destruction (MAD) nuclear war scenarios, American musical satirist Tom Lehrer wrote a song that was what he referred to as ‘pre-nostalgia’. Called “So Long, Mom (A Song for World War III)”, he explained his rationale:

“It occurred to me that if any songs are going to come out of World War III…we better start writing them now.”

In the same way, the time to start thinking about strong AI, the Singularity, and related topics is BEFORE they occur, not after.

This will matter for public policy makers, those in the TMT industry, and anyone whose business might be affected by an equal-to-or-greater-than-human machine intelligence. Which is more or less everyone!

Next, even if the Singularity (with a capital S) doesn’t happen for 100 years, the exercise of thinking about what kinds of effects stronger artificial intelligence will have on business models and society is a wonderful thought experiment, and one that leads to useful strategy discussions, even over the relatively short term.

I would like to once again thank my friend Brian Piccioni, who has discussed the topic of strong and weak AI with me over many lunches and coffees in the past ten years, and who briefly reviewed this article. All errors and omissions are mine, of course.

Culture changes even faster than technology

[Image: Washington Times front page, 1914]

Last week was the anniversary of the assassination of Archduke Franz Ferdinand and his wife. A century ago, that event triggered WWI, and the front page of the Washington Times told the news. As you can see from the picture, the sub-sub-headline was:

“Fires Several Shots, All of Which Lodged in Vital Parts, and Francis Ferdinand of Austria and Sophie Chotek, His Morganatic Wife, Were Found to Have Been Killed Instantly.”

Now, I have a History degree from UBC, love obscure words, and was a contestant on Jeopardy!


But I still had to go look up what ‘morganatic’ meant. (Wikipedia: “a morganatic marriage is a marriage between people of unequal social rank, which prevents the passage of the husband’s titles and privileges to the wife and any children born of the marriage. Now rare, it is also known as a left-handed marriage because in the wedding ceremony the groom traditionally held his bride’s right hand with his left hand instead of his right.”)

This word, which is so obscure and little-known today that only Conrad Black would use it in a newspaper column, was put in a headline of a mass-market daily only 100 years ago! I would guess that the editors of the Times in 1914 assumed that 70-90% of their readers would know what morganatic meant. Today, I would be surprised if 1% of readers did.

The news of the assassination would have been transmitted by commercial wireline telegraph service: a technology that isn’t in use anywhere in the world today. But people still know what a telegraph is, and may even know a few scraps of Morse code. The front page would have been read by the light of kerosene lamps in many homes. They aren’t as common today, but many Canadian cottages still keep a few in case of power failures. The story was written on manual typewriters, and my kids know what those are, recognise them, and can even buy a conversion kit for a USB-connected typewriter keyboard!

[Image: USB typewriter]

In contrast, although there are still 26 active sovereign monarchies in the world in 2014, the idea that a marriage between a commoner and a royal required a special term (and legal structure) has become culturally inconceivable: I might even use the word ‘extinct.’

Equally, I would predict that the changes over the next century caused by technology will not always prove to be more complete than those caused by culture, by attitudes, or by human psychology.

 

 

Anybody can draw a red square on a map of Africa: filling it with solar is the $64 trillion question.

[Map: solar area requirement]

“We could power the WHOLE WORLD with a solar array the size of the biggest red box? That square is so tiny…it’s only the size of West Virginia, which is the 10th-smallest state!” That’s a compelling story, and you can see why it has been going viral on my social media feed lately. But the graphic is badly misleading.

First, while the red square looks small, Algeria (the country on which it is drawn) is the biggest country in Africa, and the 10th biggest in the world, at nearly a million square miles.

Next, solar arrays are normally measured in square feet (or meters), not square miles or kilometers. That small red box is around 25,000 square miles, which is indeed about the size of West Virginia (24,230 square miles).

The West Virginia Division of Highways is responsible for 72,000 lane miles, which represent 90% of the roads in the state. A lane is 12 feet wide, so that amount of roadway covers about 164 square miles, or roughly 0.7% of West Virginia’s surface area. Repaving a road – with asphalt, AKA waste petroleum and dirt – costs about $125,000 per lane mile, or $55 million per square mile. And this is how affordable they find even that:

[Photo: potholes]

Currently, a 600 square foot solar array costs about $55,000 installed, or roughly $92 per square foot. At 5,280 × 5,280 feet (about 27.9 million square feet) per square mile, that means a square mile of solar array would cost about $2.56 billion. And the biggest red box on the graphic above would cost $64 trillion. Trillion.

That seems like a lot of money…and it is. It’s equivalent to ALL of US GDP for the next four years.

Put another way, global spending on all new electricity generating capacity will be about $400B in 2014, of which renewables are already 60%. In order to fill our red square with solar arrays, we would need to take the entire global capital expenditure on power plants, devote it to this one task…and keep doing it for the next 160 years.
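
Here is the whole chain of back-of-envelope arithmetic in one place, using the figures quoted above (the West Virginia road numbers are included for comparison):

```python
# Back-of-envelope solar (and road) arithmetic, using the figures quoted above.

SQFT_PER_SQMI = 5280 * 5280                       # ~27.9 million square feet per square mile

# Solar: from one installed array to the red box
cost_per_sqft = 55_000 / 600                      # ~$92 per installed square foot
cost_per_sqmi = cost_per_sqft * SQFT_PER_SQMI     # ~$2.56 billion per square mile
red_box_cost = cost_per_sqmi * 25_000             # ~$64 trillion for ~25,000 square miles

# How long at current global spending on ALL new generating capacity (~$400B/year)?
years_needed = red_box_cost / 400e9               # ~160 years

# Road comparison: West Virginia's lane miles as area and repaving cost
road_sqmi = 72_000 * 12 * 5280 / SQFT_PER_SQMI              # ~164 square miles of pavement
repave_per_sqmi = 125_000 / (12 * 5280 / SQFT_PER_SQMI)     # ~$55 million per square mile

print(f"Solar: ${cost_per_sqmi / 1e9:.2f}B per sq mi, ${red_box_cost / 1e12:.0f}T total, "
      f"{years_needed:.0f} years of global capex")
print(f"Roads: {road_sqmi:.0f} sq mi of lanes, ${repave_per_sqmi / 1e6:.0f}M per sq mi to repave")
```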

By the way, the reason the red box is drawn in North Africa isn’t random chance: the energy figures supplied ONLY work in a place where there is almost never any cloud cover and that is close to the tropics. Which is fine, except that isn’t where the world needs electricity. Not only would we need to spend $64 trillion in the middle of the desert (the shipping and construction costs will not be cheap), but we would also need to transport that electricity to the places where it will be used. That will be at least another $30 trillion, or 75 years of global spending.

Then there is the cost of storage: the electricity isn’t necessarily being generated at the time of day when it is needed. The (very) rough rule of thumb is that storage adds 25-50% to the cost.

We are now up to about $125 trillion, and I am being very optimistic on the costs.

Please don’t get me wrong. I am not saying that solar doesn’t work, or that it won’t have a growing role as a percentage of global electricity production. But I am saying that running the graphic with the headline “This map shows how little space we’d need to power the entire world with solar panels” is deceptive. It confuses people about energy and the real numbers, and that in turn will lead them to make bad choices in their lives and at the ballot box.

In 2014, being energy-literate is as important as being able to read, write, or do math.

 


Special thanks to my friend Byron Berry who started this conversation, and Brian Piccioni who helped with some of the research. All errors and opinions are purely my own, of course.

[Detail only for energy geeks. The chart that has been going around was originally from a 2005 diploma thesis by Nadine May at the Technical University of Braunschweig. The full thesis is here, and the map appears as Figure 12 on page 12. Despite the headline that accompanies the graphic talking about “powering the world with solar panels,” May’s thesis was referring to solar thermal, not solar photovoltaic. However, since thermal costs 30-50% more than PV, all it means is that the idea is even less practical than the image would suggest. Which May knew full well: “These considerations only serve to point at the large potential of this energy resource and technology respectively. It should not give the impression to be the only option for the expansion of renewable energies.”]

 

 

How to Have the Most Romantic Anniversary Ever


Marry each other all over again! Our first date was when Barbara invited me to have ‘tea and a chat’ on the afternoon of July 3, 2004, ten years ago today. It went so well that I came over to her house that evening and made her my shrimp and avocado salad.

As part of celebrating our anniversary, and to gather herself after the death of her father and our dog Sam, Barbara booked us at the Ventana Inn. We’ve been here before a few times, we love it and it feels like our spiritual home in North America. On the driveway up to the hotel, there is a sign that says “Wedding Site.” We’ve often joked that it would be romantic to get married there one day.

No witnesses, just an officiant. No photographer…she can take a few pictures with our smartphone. Flowers were roses from the Safeway at the Carmel Crossroads. We did get special wedding rings designed and made by our friends at Slashpile Designs.

The weather turned hot and sunny by afternoon, but at 9 am the fog was still with us: which made it even more romantic and typically Big Sur-ish.

It is a great thing to tell the woman you love that you would be willing to marry her all over again. It is even better to actually do it!

Thanks to Tara and Courtney at Slashpile, and to officiant extraordinaire Colette Cuccia for her customised ceremony and vows, and impromptu photo session. Our first activity on our honeymoon was a 12 mile hike up the local mountains with 2500 feet of vertical. Even more impressive, Barbara was wearing the wedding dress.

Sam Stewart RIP 2007-2014

[Photo]

Sam was born in Picture Butte, Alberta 7 years and one month ago today. He was the grand-nephew of our previous Bernese Mountain Dog (Zeke) and we flew and drove out there to pick him up. He was the last of the litter to leave home, and he was kind of nervous to meet us. The fact that Hank (the breeder) had washed and blow-dried him to floofiness probably didn’t help, but in the picture below, that ain’t water on the deck! But you can see him looking up at me bravely, ready to make a new life.

[Photo]

On the drive back to Calgary Airport we stopped in Vulcan, AB, where he had his first ever leash walk. Must have been scary for him, but he was brave again and did really well. I love this photo: Berners are very fuzzy from some angles, and we always thought this picture made it look like Barbara was walking a BEAR!

[Photo]

Perhaps it was the early hair-dryer trauma, but he was often nervous at the dog groomers. They always sent back a report card: sometimes he was “a perfect angel” but most visits he was a “brave little bear.” We knew what that meant – he was doing his best, was nice to everyone, but you could tell it was maybe a little tough for him.

[Photo]

He always came back pretty good looking: Barbara always called him the “Brad Pitt of dogs.” And not the scruffy more recent Brad: she was thinking the Thelma and Louise version!

[Photo]

But being handsome is just genetics, and no one can take too much credit for that. His inner self, though, was his own making. I’ve known and owned many dogs, and they each have a core character. Some are puppies forever, others are old and mature animals from early on. Sam was like a 3-7 year old kid: they know what the rules are, and they know about being good and bad. And they choose good – helping Dad chop green beans, smiling when going into surgery, playing with the other kids.

That was Sam: he always wanted to help, he wanted to be around the family, he was never cynical, and he lived each day in order to be a “good dog” and make his Mum and Dad (mainly Mum!) happy.

The lymphoma was only diagnosed three months ago. We tried chemo, and while the side effects were minimal, it didn’t make the tumours go away as it does in 75% of dogs. We tried another kind of chemo, and it didn’t help either: by yesterday afternoon he was having trouble breathing, walking, getting up, and had stopped eating. We talked to the vet, and gave all the people who mattered to him a chance to pay a last visit. My kids, the Neray dog-walkers, Uncle Jim.

But a funny thing happened after dinner last night. A very sick and dying dog came to life for 20 minutes of play. In our back yard, where the photo up top was taken, he chased his orange ball like a pup, barked up a storm and squeaked the ball madly, all with massive tail wags. An hour later he was having trouble breathing again, but I think those 20 minutes were HIS last gift to us. He knew we were sad for some reason, so he pulled up his white Berner socks, and had the courage and willpower to make us happy for a very little while.

This morning, just before noon, Sam had the catheter put in. We said goodbye, and held him with me patting his back. As the plunger was halfway down, his head dropped to his paws for the last time. After the vet listened and told us his heart had stopped, Barbara and our dog-walker (also Barbara) couldn’t stay in the room and left with the vet.

I leaned across him, nuzzled into his fur, and told him: “you really were a brave little bear.”

TV isn’t dead. But a very bad thing is happening.


It’s silly to talk about the death of TV. Around the world, more people are watching more minutes and hours of traditional TV per day than last year. Even in North America, the rise of over-the-top video services like Netflix has had almost no effect on hours watched or on pay TV subscriptions.

Before we go any further, it is important to note that young people have never watched as much traditional TV as older viewers: 18-24 year olds are more active, and spend more time in conversations and social settings than those over 65. To give you an idea, all Americans over the age of 2 watched just over 37 hours per week of live and time-shifted TV, those over 65 watched nearly 54 hours per week, and 18-24 year olds clocked about 24 hours per week. The 65+ group watches about 45% more than average, 18-24 year olds watch about 35% less, and those over 65 watch more than TWICE as much as 18-24 year olds.
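
Those ratios come straight from the weekly hours, as a quick check shows (the inputs are approximate values from the paragraph above):

```python
# Quick check of the viewing-hour ratios quoted above (hours per week, approximate).

all_viewers = 37.2    # all Americans aged 2+
over_65 = 54.0        # Americans 65 and over
age_18_24 = 24.0      # Americans aged 18-24

print(f"65+ vs. the average viewer: {over_65 / all_viewers - 1:+.0%}")      # ~+45%
print(f"18-24 vs. the average viewer: {age_18_24 / all_viewers - 1:+.0%}")  # ~-35%
print(f"65+ vs. 18-24: {over_65 / age_18_24:.1f}x")                         # ~2.3x
```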

That’s not the problem. This table is:

[Table: Nielsen Q4 viewing data]

In the last two years, while the US population as a whole is watching more TV (live and time shifted), 18-24 year olds are watching less.

It’s not a “big shift” as yet…only 5%. And it almost certainly varies by urban vs. rural, gender, and race.

But I have seen this movie before. I remember the newspaper industry in the 1990s. It had always been true that teenagers didn’t subscribe to or even read newspapers. That was OK…once they moved out and got a job or went to college, they started. Then one day, around 1995, that changed, and we started seeing (at first) very small annual DECREASES in print newspaper consumption among 18-24 year olds. Twenty years later, that trend has kept working away, and we now have a situation where (in one UK survey) less than 0.5% of 16-34 year olds listed print newspapers or magazines among the top five media they would miss the most.

[Chart: Ofcom survey]

For years now, the TV industry has been mopping its brow, and been thankful that what happened to newspapers doesn’t seem to have happened to it. I could be wrong, but I think we are starting to see the first signs of the ‘newspaperification’ of the traditional TV industry. Up until now we have had surveys, anecdotal data, and lots of arm-waving that young people are abandoning traditional TV, but not much real evidence. The empirical data is coming in now, and I think it is important.

But I need to be clear. Traditional TV remains a globally massive ($400 billion) industry that is growing in most regions and most age groups. It won’t go away just because some younger viewers are watching less. Nor is TV facing a simultaneous equivalent of what Craigslist did to newspaper classified ads. Advertisers care about 16-24 year olds, but they also care about all the other demographics too.

This isn’t the death of TV. But it is important.

 
