Monday, February 19, 2018

Can Dating Apps Save Marriage?

On Valentine's Day last week, the Washington Post carried an article by its technology reporter Drew Harwell that looked into the dating and marriage situation in Silicon Valley, the area around and south of San Francisco where so many high-tech companies have clustered.  What he found was not good.  Despite the proliferation of dating apps with kooky names like Zoosk, Coffee Meets Bagel, and OkCupid, he talked to many young singles who are jaded about the whole idea of relationships between the sexes and skeptical that dating apps are much use for forming them.

One problem the area has is the demographic preponderance of men.  Some zip codes around Palo Alto and vicinity have 40% more single men than single women.  To forestall a stampede of women out west to catch the next potential Bill Gates or Mark Zuckerberg (an old-fashioned idea to start with), the reporter cited a common saying among single women who already live in Silicon Valley:  "The odds are good, but the goods are odd."

With six or seven sixteen-hour days considered by many companies to be a standard workweek, it's understandable that young singles out there scarcely have time to sleep and take baths, much less develop a relationship with a potential life partner that could endure beyond the first date.  Unlike eating and sleeping, the sexual aspect of life is optional on an individual basis for the human creature, though universal neglect of this matter would lead to the demise of the species. 

Perhaps what we are seeing is a kind of specialization not unlike what happens among social insects such as bees and ants.  Reproduction is limited to the queen and a few necessary drones, while the vast majority are, well, worker bees to whom sex (and marriage in the case of people) is not a live option, so to speak.  I doubt, however, that Google and Apple would improve their chances of hiring the best and the brightest if they added a requirement of sworn celibacy to their employment requirements.

To those of a religious persuasion who see the norm for most people to be marriage and children, Silicon Valley is an anomaly where devotion to one's job trumps almost everything else.  But the idea of life as a giant winner-take-most competition seems to make sense to a lot of young people, and may explain the popularity of grim fiction such as The Hunger Games.  And it's understandable that the competitive feel would taint even such activities as seeking a mate, with women, especially, setting their standards for a suitable match impossibly high.  But requiring your next date to have the physique of a Superman and the bank statement of a billionaire is a good way to go a long time between dates.

And men don't always approach the problem realistically either.  Back in the 1980s, I knew a single man whom most women would have considered highly eligible.  He eventually met a woman he fell in love with, but a few weeks before their wedding he expressed doubts to me:  "What if once I get married, somebody else comes along who's really the right one?"  I told him he couldn't be sure that wouldn't happen, but that it didn't matter either.  He evidently figured out that marriage is a commitment more serious than any job, or career, or (for some) even life itself.  I am glad to report that they are still married, some thirty years later, so if he ever ran across another woman who might have ranked higher on an online dating score than his present spouse, he must have just kept going.

That couple met long before dating apps were invented, but this is not to say such apps can't be helpful.  A relative of ours, a widower whose wife died about four years ago, is now planning his marriage to a woman he met through an online dating service on the first try.  They are from similar employment backgrounds and are both in their 50s, so theirs is not the young never-married situation that the Silicon Valley folks are typically in.  But it can work, certainly, under the right conditions.

But a more fundamental problem results when someone expects an online service to transform one's whole life by bringing the ideal mate into it.  Some people, such as my friend from the 1980s and our relative, want to make a lifetime commitment.  The traditional Anglican wedding vows read in part, ". . . be faithful to [him or her] as long as you both shall live. . . ."  But many young people today have seen so many such commitments broken by people their parents' age that, while they may think a lifelong marriage is an appealing ideal for a romance novel, they judge its chances of working out in real life so small that they don't even seriously consider it when searching for a romantic partner.

The problem with this attitude is that it dooms whatever relationships they do form to temporary alliances, with both partners keeping one eye on the exit and looking for signs that things aren't working out, so as to leave before they are seriously hurt.  But guess what—even the briefest of encounters can leave lasting wounds, and often does.

As with many other forms of technology, dating apps can be helpful or harmful depending on the intentions with which they are used.  As many happily married couples who met through such an app can attest, they can play a role in increasing the net sum of human happiness.  Or, as many in Silicon Valley have found, they can hold out the illusion of hope for a happily-ever-after which runs aground when it encounters the unfavorable demographics of the region and the short-term mentality engendered by the competitive world of high-tech engineering. 

Especially for women, the problem of how to have both a rewarding career in engineering and a satisfying, enduring marriage can be a hard one these days.  It's not easy for men either.  Dating apps may be part of the answer, but clearly, this is one problem that technology alone can't solve.

Sources:  The article "Why Silicon Valley singles are giving up on the algorithms of love" appeared on Feb. 14, 2018 in the online version of the Washington Post.  I also referred to published statistics on marriage.  The wedding-vow quotation is from the Anglican Book of Common Prayer.  And I thank my wife of 39 years for pointing out the Post article to me.

Monday, February 12, 2018

The Latest Amtrak Crash: A Deadly Combination

Many accidents in complex systems happen when two or more failures align like tumbler pins in a lock, opening the way to tragedy.  That is apparently what happened around 2:45 AM on Sunday, Feb. 4, outside the central South Carolina town of Cayce.  Here's what led up to the crash.

For the last several years, U. S. railroads have been under the federal gun to complete installation of Positive Train Control (PTC), a complicated system involving GPS receivers on trains, transponders along the tracks, and coordinated data links that will automatically slow down trains that are going too fast and stop those heading toward disaster.  Lack of PTC has been cited in every recent fatal train wreck, and so at the time of this crash, installers were working on the South Carolina section of track in question to put in the necessary PTC equipment.  The only trouble was, as part of the process they had to shut down the safety block signals—the red-yellow-green lights beside the track that inform the engineer as to whether the track ahead is clear. 

Railroads have a way of dealing with the absence of block signals, which is to dispatch trains by means of documents called "track warrants."  Obviously, there has to be a special procedure for this, with good communications by radio to the dispatcher, because running through an area with no signals is a little like flying an airplane blind.  It can take more than a mile to stop an average train, so by the time the engineer sees an obstruction on the track it's usually too late to do anything more than set the brakes, blow the horn, and hope.
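As a rough sanity check on that mile-plus figure, the constant-deceleration formula d = v²/(2a) can be applied.  The braking rate used below is an assumed ballpark value for a loaded freight train, not a number taken from the article or any accident report:

```python
# Back-of-the-envelope train stopping distance using d = v^2 / (2a).
# The deceleration of ~0.15 m/s^2 is an assumed typical value for a
# heavy freight train in a full service brake application.

def stopping_distance_m(speed_mph: float, decel_ms2: float = 0.15) -> float:
    """Distance in meters to stop from speed_mph at constant deceleration."""
    v = speed_mph * 0.44704            # convert mph to m/s
    return v * v / (2.0 * decel_ms2)

d = stopping_distance_m(50)            # a modest 50 mph
print(round(d / 1609.34, 2), "miles")  # comfortably over a mile
```

Even at 50 mph and with these generous assumptions, the answer comes out to more than a mile, which is why seeing an obstruction with your own eyes is no protection at all.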

At this writing, it is unclear whether the track-warrant procedure was followed correctly.  But what is clear is that earlier in the evening, a railroad employee set a switch to let a freight train pull off the main line (the line the Amtrak train was going to use later) onto a siding, and the switch was then locked in place, still set to the siding.  In other words, any train coming down the main line in the same direction was going to head straight onto the siding, toward the sidelined freight.

Normally, this switch setting would cause the signals on the main line to change to yellow or red.  But due to the work going on to install PTC, the signals were inoperative.  So all that stood between the southbound Amtrak train that was coming along about 2:45 AM and disaster was good communications among the person who set the switch, the train dispatcher (many miles away in a CSX control center, CSX being the freight railroad that owns the track which Amtrak uses), and the Amtrak crew.
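The protection that was lost can be pictured with a toy sketch of this interlocking logic.  The function and names below are hypothetical illustrations; real railroad signaling is vastly more elaborate:

```python
# Toy model of the interlocking described above: a switch aligned to the
# siding should force the main-line signal to red, but when the signals
# are taken out of service for PTC installation, that protection vanishes.
# (Hypothetical names; not actual railroad signaling logic.)

def main_line_signal(switch_position: str, signals_in_service: bool) -> str:
    """Return the aspect a main-line block signal would display."""
    if not signals_in_service:
        return "dark"    # signal suspended: offers no protection at all
    if switch_position == "siding":
        return "red"     # switch misaligned: stop approaching trains
    return "green"       # route lined and clear

# With signals working, the misaligned switch is caught:
assert main_line_signal("siding", signals_in_service=True) == "red"
# With signals dark for PTC work, nothing warns the crew:
assert main_line_signal("siding", signals_in_service=False) == "dark"
```

With the signals dark, the entire burden of that red-light logic fell on human beings talking to each other over the radio.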

The third thing that is clear is that the communications broke down.  The last thing the Amtrak engineer saw was the end of the freight train, as his engine barreled off the main line at 56 MPH onto the siding and crashed.  He and the conductor were killed, and about 100 passengers were injured in the resulting Amtrak car derailments, some critically.

Amtrak officials were quick to throw blame at CSX, whose tracks they were using, since it was CSX's responsibility to ensure that any switches its crews used were set back to the proper position.  Records indicate that the freight-train crew reported they had set the switch correctly, so it is unclear at this point how the switch nevertheless ended up in the wrong position.

While this is only the latest in a string of several fatal Amtrak accidents, each one has apparently had a different set of contributing factors, and accusations that Amtrak's safety culture is at fault are premature, to say the least.

The irony of this particular accident is that the rush to install PTC, itself a safety feature, apparently contributed to causing it.  It reminds me of the recent Takata air-bag-inflator fiasco, in which millions of cars had to be recalled, and many people were killed by defective inflators that shot shrapnel at them in accidents that would otherwise have merely bent a few fenders.

This is not to say we shouldn't have airbags, or that we should call a halt to installing PTC.  And here is where we fall back on a philosophical method which engineers use almost without thinking:  utilitarianism, otherwise known as the greatest good for the greatest number.  Utilitarianism is not the only way to decide ethical issues, by any means, but it has its uses.  Clearly, it makes sense to complete PTC installations even if it means shutting down signals temporarily here and there.  But the problem comes when those responsible for safety measures get so focused on the future good those measures will do that they neglect the present harms the installation work can cause.  I don't know what went wrong with the track-warrant system in this case, but clearly something did.  And once a decision is made to install a safety feature, it is easy to allow too many temporary compromises in present safety in view of the greater good that the finished installation will bring.

But that temptation has to be resisted.  Takata shouldn't have been as sloppy as they were in making crummy airbag inflators that would turn into bombs down the road a few years.  And everyone involved—train dispatchers, PTC installers, and above all, the freight train crew who apparently left the switch in the wrong position—should have been doing a better job communicating in the absence of the usual track signals. 

Sometimes people who work on safety features get careless because most of the time, the features don't see action.  But they are really like a standing army ready for battle.  When a crisis comes, the safety features rise to the top of the priority list.  Never mind the usual function of the system, whether transportation, communication, or something else:  if the user is injured or killed, it would have been better not to have made the product at all.  So although Amtrak's safety culture alone may not be at fault, clearly something went wrong in Cayce that night.  And more work needs to be done to make sure that a complicated system like a railroad runs even more safely with PTC than it does without it.  Just installing PTC won't guarantee that, because PTC itself has the potential to cause trouble.  Let's hope that it doesn't, and that the recent fatal train mishaps are the last ones before PTC makes train-passenger fatalities as rare as airline-passenger fatalities are today.

Sources:  I referred to a thorough report on the accident carried by NPR on their website on Feb. 5.

Monday, February 05, 2018

Do Machines Determine Death?

Jahi McMath is legally dead in California, where a routine tonsillectomy on the thirteen-year-old girl went awry on Dec. 9, 2013 and she basically bled to death.  But she is still legally alive in New Jersey.  After refusing to let the California hospital harvest her organs, her family insisted she was still alive and moved her to New Jersey to take advantage of a law that allows them to do so.  Her case, described in a recent New Yorker article, raises serious questions about the role of technology in determining the end of human life. 

New Jersey and New York are the only states which allow families to refuse a diagnosis of brain death if it violates their religious beliefs.  This exception was made to accommodate the beliefs of Orthodox Jews, who believe that breathing indicates life.  Not so long ago, most people and governments would have said the same thing, but then medicine developed the ability to monitor brain function via electroencephalography (the EEG machine), as well as more sophisticated technologies such as MRI scans and automatic ventilator machines. 

These changes were reflected in a 1981 report written by a Presidential commission entitled Defining Death:  Medical, Legal, and Ethical Issues.  Modern ventilator machines can keep the rest of a human body functioning even after the brain is destroyed—for a time, anyway.  But the ability to detect brain function with EEGs, plus the increasing popularity of organ transplants (which stand a better chance of success if the organ is harvested from a donor whose systems are still functioning), led to a redefinition of death as the cessation of activity in the whole brain.  Definitions are one thing, but decisions made under stressful actual conditions are another, especially in gray areas such as Jahi's.

In New Jersey, Jahi underwent a tracheotomy and had a feeding tube inserted.  Although she is still dependent on a ventilator, an MRI by a New Jersey brain researcher showed that parts of her cerebrum were intact.  The cerebrum is considered to be the seat of higher mental activity.  And there are videos showing that she can occasionally respond accurately to her mother's request to move certain fingers, as well as heart-rate changes when she hears familiar voices.  Because the legal cap of $250,000 on malpractice damages applies only if the victim dies, Jahi's parents are suing the State of California to bring about a trial in which a jury will determine whether Jahi is dead or alive in that state.

I find it fitting that the legal system in at least two states defers to religious beliefs on matters of death, because in doing so the law acknowledges that it perhaps doesn't know everything there is to know about this subject.  In dealing with death, we have to base our actions on some theory of what it involves.  And there are two distinctly different current narratives.

The first version is the secular narrative.  Human life is for purposes we can't discern and came about for reasons we can't figure out.  Human life on the whole is good, but utilitarian considerations of the greatest good for the greatest number tell us that if we use the criterion of brain death rather than more traditional definitions of death, organ transplants can benefit other people more.  And I see this point of view.  My brother-in-law is now doing very well, freed of the drudgery of thrice-weekly dialysis treatments, because he received a kidney transplant from a brain-dead accident victim last August.  And if Kansas hadn't been using the modern definition of brain death on the donor, doctors would not have been able to harvest that kidney.

The second version is the religious narrative, and because I'm most familiar with it, I'll give the Christian version.  God created the heavens, the earth, and all that is in them.  He created humans with the ability to sin, which they unfortunately took advantage of, and death entered the world.  But believers in Jesus Christ have overcome death and will rise with him at the general resurrection.  A person's spirit uses the brain, but brains are not necessary in order for a person to exist.  Angels and God Himself are personal beings, but they are not encumbered by brains.  There are testimonies from dozens, if not hundreds, of people who have had near-death experiences in which they have visited Heaven, and then returned to their bodies, some of whom probably met the criteria for brain death in the interim.  And if surgeons had started harvesting organs before they came back, well, that would have been the end of that.

There are both present and future reasons why Jahi's parents don't want her taken off the machinery that keeps her going.  One is the simple human desire to have your child with you.  We know each other through our bodies, and in a real sense, we are our bodies. To let a loved one's body cease to live and fall victim to decay is a final parting from that body which we have known and loved. 

The second reason is prospective:  the hope that Jahi might recover.  Medical science tells us that this is very unlikely in Jahi's case.  But broadly similar cases have resulted in the eventual recovery of the person involved.  In the magazine article's photo of Jahi on her bed in New Jersey, she is covered with a blanket that reads "I Believe in Miracles—Mark 11:24."  The reference is to the words of Jesus:  "Therefore I say unto you, What things soever ye desire, when ye pray, believe that ye receive them, and ye shall have them."  I'm not going to presume to interpret that passage here, but the point is that the Christian virtue of hope sometimes leads people to do things that look ridiculous, wasteful, or even sacrilegious to less hopeful people.

I don't know how Jahi's situation will end up.  But that is the point.  Sometimes even the best and most advanced technology won't tell us everything we want to know.  And in such cases, faith may be a better guide than technical expertise.

Sources:  The article by Rachel Aviv, "The Death Debate" appeared on pp. 30-41 of the Feb. 5, 2018 issue of The New Yorker.  I also referred to Wikipedia articles on brain death and Jahi McMath.

Monday, January 29, 2018

Facebook's Frankenstein Effect

The Frankenstein story, as so vividly penned by Mary Shelley in 1818, came at the dawn of the Industrial Revolution, which brought the fruits of scientific knowledge to the masses.  Victor Frankenstein's sub-creation, the monster, turns against him, and the scientist and inventor rues the day he brought it to life.

At a November 2017 conference in New York City sponsored by the Clinton Foundation, two inventors who were there at the creation of Facebook expressed similar regrets for what they had created.  In doing so, they became only the latest in a long series of technical types who have expressed various degrees of regret and guilt for creating new media such as radio, television, and Facebook.

Sean Parker served as the first president of the social-media giant Facebook, and when someone at the conference asked about the effects of Facebook on society, he recalled the thinking that went into the system's design.  His reply deserves quotation at length:

"You know, if the thought process that went into building these applications, Facebook being the first of them to really understand it, that process was all about, 'How do we consume as much of your time and conscious attention as possible?'  That means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or post or whatever, and that's going to get you to contribute more content, and that's going to get you more likes and comments, you know, it's a social-validation feedback loop . . . It's exactly the kind of thing that a hacker like myself would come up with because you're exploiting a vulnerability in human psychology." 

Another speaker at the conference, a former Facebook developer, when asked if he had done some soul-searching concerning his role in the creation of Facebook, said, "I feel tremendous guilt. . . . I think we have created tools that are ripping apart the social fabric of how society works." 

Strong words.  In deploring what happened to their technically sweet ideas, these inventors and entrepreneurs remind me of the words of Lee De Forest, who invented the triode vacuum tube that made radio broadcasting possible.  In his later years, he became disgusted at what radio had become, and in 1940 wrote an open letter to the National Association of Broadcasters in which he protested, "What have you done with my child, the radio broadcast?  You have debased this child . . . "  Vladimir Zworykin, who developed the first practical electronic television system for RCA in the 1930s, had nothing good to say about what it had become by the 1970s, and rarely watched TV himself.  And Harold Alden Wheeler, a prolific radio and TV engineer and inventor, was well known for forbidding his family to watch TV at all.

What is it about engineers and software developers that makes them so sensitive to the negative impacts of their successful inventions?  After all, Facebook does a lot of good too, in connecting families and friends separated by geography and letting people keep in touch who otherwise might not.  In fact, some who deplore the parlous state of our public discourse in the era of Facebook flaming and Presidential tweets look back with fondness to those good old days when electronic news happened only once a day at 6 PM on only three TV channels, and everybody heard more or less the same thing, carefully filtered through professional media editors.  But that was the very same television programming that Zworykin and Wheeler deplored.

People who imagine things before they are created have to believe in them strongly, and believe that their creations will do some good—good at least for themselves, and perhaps for other people as well.  Only Sean Parker knows exactly what was going on in his mind when he cooked up first Napster and then contributed to the beginnings of Facebook.  But by his own testimony, he was basically hacking the human brain—taking advantage of the little squirt of dopamine most people get when they see that someone out there has acknowledged their existence positively, by sending an email, a text, or a "like" on Facebook.  Multiply those squirts by the millions every day, and there is the psychological engine that drives Facebook and most other social media.

By some standards, Sean Parker has nothing to complain about. He doesn't feel so guilty about Facebook that he has divested himself of the several billion dollars it has earned him.  But it is rare to find people who have both devoted years of their lives to becoming technically proficient in a narrow field, and who can also take a wise, broad view of all the potential effects of their technical developments, both positive and negative, before they are developed.  So when an idea of theirs takes wings and flies away like Facebook did, and in the natural course of events gets some people into trouble, they are disappointed, because they only imagined the good things that would happen as a result, not the bad things. 

Any technology that is used by a large enough number of people is going to be used badly at some point, because the only Christian doctrine that is empirically verifiable is going to come into play:  the doctrine of original sin.  The culpability of the technology's developers depends on what they were trying to do to begin with.  Wanting to connect people, and even getting rich, are not necessarily bad motives.  But once the technical cat is out of the bag, inventors can at least try to do what they can to mitigate the harmful effects of their technologies.  After Alfred Nobel learned that what he would mostly be remembered for was the death and destruction wrought by his invention of dynamite, he hastily set up the Nobel Prizes partly as a kind of penance or compensation to humanity for the evil that his invention had done. 

In 2015, Parker set up the Parker Foundation, a charitable organization whose focus includes civic engagement.  Perhaps by this means, Parker and others like him can try to repair some of the social damage they see Facebook and other social media doing.  The Nobel Prizes did not put an end to war, and I don't expect the Parker Foundation is going to lead on its own to a new era of sweetness and light in public discourse.  But at least he's trying.

Sources:  Recordings of the interviews from which the two quotations from Parker and his associate were taken are available online, approximately at minutes 16 through 20, as is a web report citing the same interview with Parker.  Lee De Forest's words on radio broadcasting can be found in the Wikipedia article on him.  I also referred to the Wikipedia articles on Sean Parker, Vladimir Zworykin, and Harold A. Wheeler.

Monday, January 22, 2018

Can Artificial Intelligence Make Art?

In February's Scientific American, technology columnist David Pogue wonders if human artists and composers should start worrying about a new development in artificial intelligence (AI):  the automated composition of music and production of paintings.  Computer scientist Ahmed Elgammal's Art and Artificial Intelligence Lab at Rutgers University is developing algorithms that start with well-known works of real art and abstract elements of style and composition from them.  Then the machine can either be set to do imitations in the same style, or a "style ambiguity signal" can be turned on that forces the digital Rembrandt to deviate from the style it has learned.
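The "style ambiguity signal" appears to work roughly in the spirit of a creative adversarial network:  the system is rewarded when a style classifier cannot confidently assign its output to any one known style.  The following is a loose sketch of that general idea, not Elgammal's actual code; the function name and the toy probability vectors are invented for illustration:

```python
import math

# Sketch of a "style ambiguity" penalty: the cross-entropy between a
# style classifier's output and a uniform distribution over the known
# styles.  The loss is smallest when no single style dominates, which
# pushes a generator away from slavish imitation of any learned style.
# (Illustrative only; not Elgammal's implementation.)

def style_ambiguity_loss(style_probs):
    """Cross-entropy against a uniform distribution over known styles."""
    n = len(style_probs)
    return -sum((1.0 / n) * math.log(p + 1e-12) for p in style_probs)

confident = [0.97, 0.01, 0.01, 0.01]   # clearly one known style
ambiguous = [0.25, 0.25, 0.25, 0.25]   # no dominant style

# The ambiguous output incurs the lower loss:
assert style_ambiguity_loss(ambiguous) < style_ambiguity_loss(confident)
```

In a full system this term would be traded off against a separate signal that keeps the output recognizable as art at all; turn the ambiguity term off and the machine simply imitates.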

I have viewed some of these products on the lab's website, and while I make no claims to be an art critic, my impression is that Rembrandt doesn't have much to worry about.  In fairness to Elgammal, he doesn't claim that what his system is doing is just as good as human-created art.  Rather, he sees himself as exploring theories of art creation using AI to see what happens if style rules are either slavishly followed or intentionally broken.

When he mixed the products of his AI "artist" with works by actual humans in a couple of different sets—abstract expressionism and contemporary art from a recent European art show—he found that people who viewed the artworks without knowing which were by humans and which by a computer often picked the computer-generated ones as more "intentional, visually structured, communicative, and inspiring" than those made by unaided humans.  He was surprised by this outcome, but he shouldn't have been.

Most people will agree that much visual art that is bought and sold for millions of dollars today doesn't look much like the artworks that were painted, say, a hundred and fifty years ago.  Elgammal has happened to come along with his AI artist at a time when the so-called standards for what constitutes art are all but nonexistent.  Last year The New Yorker carried a story about a young man named Jean-Michel Basquiat who mainly wanted to be famous.  He tried music as a path to fame first, but was discouraged by the fact that it takes years of practice to become even an adequate musician.  So he switched to art.  Starting with graffiti, he attracted the attention of the art world, rocketed to international fame, and died of a heroin overdose at the age of 27.  The magazine's art critic Peter Schjeldahl thinks that his art is worth looking at, but probably not worth paying $110 million for, as a Japanese businessman did last May for a Basquiat work from 1982.  Schjeldahl himself described it as a "medium-sized, slapdash-looking painting of a grimacing skull."  Judging by the photograph in the article, that's a pretty accurate description.

My point is that what passes for art these days is a departure from what has passed for art in the past, well, several thousand years.  Up until the nineteenth century, artistic works represented both recognizable objects, and also the higher operations of the human mind and spirit, operations that distinguish human beings from the lower animals.  G. K. Chesterton regards the production of art as one of the primary distinctions between people and other animals, and points to the cave paintings such as those in Lascaux, France, as being evidence that those who painted them were humans like us. 

One chronic concern that arises as AI advances into more areas of endeavor formerly regarded as exclusively human is this:  when AI starts to do a certain kind of thing better and cheaper than people, what will happen to the people who earn their living doing it now?  So far, humanity has survived the replacement of telephone operators by automatic dialing, of elevator operators by pushbutton elevators (everywhere except at the United Nations building, I'm told!), and more recently, the advance of AI into the professions of engineering, medicine, and even law.  Right now, the unemployment rate in the U. S. is at a historic low, but that is due mainly to an economy that is close to overheating, and the figure doesn't take into account the millions of people who neither look for work nor are particularly troubled that they're not working.  And here is where we find the real matter to be concerned about.

The issue isn't whether AI will send some artists to the unemployment line.  The real issue is how we regard art and how we regard humanity.

When Chesterton wrote in 1925 that "Art is the signature of man," he didn't mean just any random scrawl.  He had a particular thing in mind, namely, that the portrayal of nature as interpreted by the human spirit is unique to man.  Certainly no other animal produces anything that is generally regarded as a work of art.  I am aware of the bowerbirds of Australia and New Guinea which construct large elaborate arches of sticks and decorate them with blue objects and sometimes even paint the walls.  But this is simply instinctive behavior directed at attracting a mate.  No one has seen bowerbirds exchanging worms for a particularly fine bower and signing bills of sale. 

If people today can't seem to tell the difference between computer-generated art and human-generated art, the reason isn't that the computer is now as artistic as a human artist.  The problem is that artists have degraded their craft to the level of a machine-made product, and taught the general public that yes, that is indeed art even if I tied brushes to two turtles and let them crawl across the canvas.  When Marcel Duchamp tried to exhibit an ordinary urinal as art in a 1917 New York art show, the show's committee rejected it, but photographer Alfred Stieglitz allowed him to put it up in his studio.  In 2004, 500 "renowned artists and historians" reportedly selected this work, called simply "Fountain," as the most influential artwork of the twentieth century.  And it was made by a machine.

Sources:  David Pogue's column "The Robotic Artist Problem" appeared on p. 23 of the February 2018 issue of Scientific American.  Some creations of Prof. Elgammal's AI artist can be viewed on his lab's website.  (The ones with "style ambiguity" turned off are truly creepy.)  A brief introduction to Jean-Michel Basquiat can be found on the New Yorker website.  Chesterton's comments about cave paintings are from pp. 30-34 of The Everlasting Man, reprinted in 2008 by Ignatius Press (originally published 1925).  And I also referred to Wikipedia articles on Marcel Duchamp and Fountain.

Monday, January 15, 2018

Russian Interference in Elections: Fancy Bear is Not Exactly What We Had in Mind

Excuse the long title, but whenever humorist Roy Blount Jr. would run across something totally contrary to his expectations, he would say mildly, "Well, that's not exactly what I had in mind."  By a convoluted series of circumstances, we in the U. S. have become vulnerable to election interference by a foreign power in a way that few people anticipated.  This is a lesson in how novel technologies and aggressions can outwit both legislators and organizations dedicated to preventing such aggressions.  And novel countermeasures—some of them possibly costly in both money and convenience—may be needed to deal with them.

Historically, it has been difficult for non-U. S. citizens or foreign countries to interfere with U. S. elections.  While the fear of such interference has always been present to a greater or lesser degree, my amateur historical memory does not bring to mind any significant cases in which a foreign power was clearly shown to have acted covertly in a way that provably influenced the outcome of a national election.  Laws prohibiting foreign campaign contributions acknowledge that the danger is real, but if such interference happened in the past, it was so well concealed that it never got into the historical record. 

Ever since there have been governments, there have been privileged communications among those in power which, if disclosed in public, might prove embarrassing or even illegal.  But until recently, these communications took place either by word of mouth, by letter and memo, or by phone.  And considerable espionage work had to be done to intercept such communications.  You had to have a spy or a listening device in place to overhear critical private discussions.  You had to steal or secretly photograph written documents, and you had to tap phone lines.  All of these activities were by necessity local in nature, meaning that a foreign power bent on obtaining embarrassing information that could sway an election had to mount a full-scale espionage program, with boots on U. S. soil, and take serious risks of being caught while engaged in a fishing expedition that might or might not reveal any good dirt, so to speak.

Then came the Internet and email.

While much email physically travels only a few miles or less, it passes through a network in which physical distance has for all intents and purposes been abolished.  So if I email my wife in the next room, somebody in Australia who simply wants to know what I'm emailing can try to hack into my emails and, if successful, can find out that I'm asking her to get crunchy raisin bran at the store today.  Nobody in their right mind would bother to do such a thing, but the Internet and email have made it hugely easier to carry out international spying on privileged communications of all kinds.  The kinds of spying that used to be done only in wartime by major powers can now be done by a few smart kids in some obscure but hospitable country.  And here is where Fancy Bear comes in.

A private security firm in Japan has discovered signs that the same group probably responsible for hacking the Democratic Party's emails during the 2016 elections is trying to mess with the Congressional elections coming up later this year.  An elaborate mock-up of the internal Senate email system has been traced to this so-called Fancy Bear group, which evidently has ties to Russia.  Such a mock-up would be useful for entrapping careless Senate staffers who might mistakenly reply to an email that looks legitimate but is in fact a kind of Trojan horse, giving the Russians (or their minions) access to all further emails sent through what looks like a legitimate site but is in fact a trap.
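One simple line of defense against this kind of spoofing is to flag domains that are close to, but not identical to, a trusted one.  Here is a minimal sketch in Python; the domain names are hypothetical, and real mail filters weigh many more signals than edit distance alone.

```python
# A crude phishing check: flag a domain that nearly matches a trusted one.
# The domain names below are hypothetical examples, not real Senate systems.
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete from a
                           cur[j - 1] + 1,              # insert into a
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def suspicious(domain, trusted, max_dist=2):
    """True if `domain` is close to a trusted domain without being one."""
    return any(0 < edit_distance(domain, t) <= max_dist for t in trusted)

TRUSTED = ["senate.gov"]
print(suspicious("senale.gov", TRUSTED))   # -> True: one letter off
print(suspicious("senate.gov", TRUSTED))   # -> False: the real thing
```

The point of the `0 <` check is that an exact match is by definition legitimate; only near-misses are flagged.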

I am not a cybersecurity expert and won't speculate further on how the Fancy Bear people do their dirty work.  But the fact that they are still out there working to steal emails and release them at times calculated to throw U. S. elections one way or the other brings to mind two things that we need to consider.

1.  Messing with electronic voting is not the main cyber-threat to our election system.  Much concern has been expressed that electronic voting systems are not as secure as they should be.  While this is probably true, it doesn't appear to be a significant problem that has actually resulted in thrown elections, except perhaps in small elections at the local level, and usually by accident rather than by design.

2.  We may have to trade some Internet freedom for security in guarding U. S. elections against foreign interference.  The moral innocents who designed the Internet back in the 1970s made the mistake of assuming that everybody who would use it was just like themselves, or rather, their polished-up image of themselves:  sincere, forthright, open, and filled with only good motives.  One wishes that the concept of original sin had been included in every computer-science curriculum since the discipline began in the 1960s, but that isn't the case.  The radically borderless and space-abolishing nature of the Internet brings foreign threats and interference to everyone's doorstep.  With the click of a button by somebody in Uzbekistan, Maude in Indianapolis can be served the latest fabricated scandal on Facebook about the guy she was thinking of voting for, or hear on the news that his private emails to his mistress have been posted on Wikileaks.

Not that I condone elected officials who have mistresses.  But these are examples of the kinds of things that can go on once everybody routinely uses a medium which, under present circumstances, is about as private as yelling your credit card number to somebody on the other side of Grand Central Station.

To make email as secure as the U. S. Postal Service, we will obviously require more rigorous and well-organized security protocols than we have had up to now.  My own university has recently gone to a two-step verification system that is inconvenient, but it greatly heightens the security of certain privileged operations such as entering grades.  It may be time for everyone concerned in elections—political parties, governments, and private citizens—to agree to some kind of inconvenient but more secure email practices, applied uniformly with government regulation if necessary, so that we can get back to where we were in terms of preventing outsiders from interfering with our most characteristic action as a democracy—electing those in power.
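For the curious, two-step systems of this kind are often built on time-based one-time passwords (TOTP, standardized in RFC 6238): the server and the user's phone share a secret key, and both derive a short-lived code from it every thirty seconds.  Here is a bare-bones sketch in Python; the shared key shown is the RFC's own published test key, not anything anyone should use in practice.

```python
# Sketch of a time-based one-time password (TOTP, RFC 6238) generator,
# the kind of code a two-step verification app displays every 30 seconds.
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226: HMAC the counter, then 'dynamically truncate' to a short code."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # low nibble picks a start byte
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, for_time=None, step=30, digits=6):
    """RFC 6238: the counter is simply the current 30-second time slot."""
    t = int(time.time() if for_time is None else for_time)
    return hotp(key, t // step, digits)

# RFC 6238's published test vector: at t = 59 s, the 8-digit code is 94287082.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # -> 94287082
```

Because the code depends on the current time slot, a stolen code expires within seconds, which is exactly the property that makes phished credentials much less useful to an attacker.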

Sources:  The AP report by Raphael Satter "Cybersecurity firm:  Senate in Russian hacker crosshairs" was published on Jan. 12 and carried by numerous papers, including the Washington Post at 

Monday, January 08, 2018

Meltdown and Spectre: Sometimes the Good Guys Win

Most computer viruses and bugs go for particular operating systems, Windows being the most popular, because it's on the majority of PCs.  So Mac users, although occasionally suffering their own kinds of attacks, usually breathe a sigh of relief every time a major PC-only virus hits the news. 

But over the weekend, you may have heard about a pair of bugs called Meltdown and Spectre that go for hardware, not software.  In particular, Meltdown is a vulnerability associated with Intel processors made since 1995, and the dominance of Intel means Macs, PCs, and most you-name-it computers are potential targets.  Spectre reportedly is even worse.  But the key word here is "potential."  In an announcement, Apple claimed that no known malicious hacks have actually been committed using either of these bugs.  And by the time the general public learned about them, the major computer and software makers were already well on their way to devising fixes, although the fixes may have their own drawbacks.

The reason no bad guys have apparently used these bugs is that they were discovered independently by computer researchers in Austria and the United States.  And following a policy called "responsible disclosure," the researchers notified Intel that their chips were vulnerable to these bugs.  So until now, apparently the criminal elements of the computer world either didn't know of the bugs or didn't use them.

I am not a computer scientist, but the technical details of how Meltdown happens are interesting enough to try to summarize.  Apparently, some years back chip designers started doing certain things to speed up the use of what is called "kernel memory."  If you think of the kernel as a little homunculus (call him the Kernel) sitting in the control room doing the computer math, the trick they were playing with the Kernel's memory amounts to having other homunculus-people in the room guess at what the Kernel's going to want to do next, and bring stuff out of memory so it can be waiting for him when he needs it.  And all this stuff has to be secure from outside spying, so there are even security checks done way inside the control room there.

But Meltdown evidently exploits some little timing gap between the moment the contents of memory get there and the moment they are certified as secure.  It's like some spy taking a picture of the secret document during the few seconds between its arrival in the room and when it's put into the "Top Secret" box.  I'm sure some computer scientists are having a good laugh at my pitiful attempt to describe this thing, but that's the impression I got, anyway.
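For what it's worth, the shape of the trick can be shown in a toy simulation.  This is emphatically not a real exploit: Python can't trigger speculative execution, so here the processor's cache is stood in for by a set that merely remembers which probe-array index got touched during the doomed access.  Recovering the secret from that leftover footprint mirrors the probe step of the real attack.

```python
# Toy simulation of Meltdown's probe-array trick.  The "cache" is a set that
# records side effects; a real attack measures cache-line timing instead.
SECRET = 42  # a kernel-memory byte the attacker has no right to read

class ProbeArray:
    """Stands in for 256 cache lines; records which indices get 'warmed up.'"""
    def __init__(self):
        self.touched = set()

    def access(self, index):
        self.touched.add(index)  # a real attack warms a cache line here

def transient_read(probe):
    # During the speculative window, the secret is fetched and used as an
    # index into the probe array BEFORE the permission check fires.  The
    # architectural result is discarded, but the cache footprint survives.
    probe.access(SECRET)
    raise PermissionError("access squashed")  # the belated security check

def recover_secret():
    probe = ProbeArray()
    try:
        transient_read(probe)
    except PermissionError:
        pass  # the fault is expected; the side effect is what matters
    # Probe phase: find which "line" is warm -- that index is the secret.
    return next(iter(probe.touched))

print(recover_secret())  # -> 42
```

The moral of the toy is the same as that of the real bug: even though the forbidden read is officially cancelled, it leaves a measurable trace behind, and the trace is enough.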

So there are two ways to fix it:  redesign the hardware or write a software patch and put it in upgrades.  Obviously, if you're running older hardware, you're not going to rip out your Intel processor and put in the new one once they've designed the flaw out of it.  So the only practical thing right now is installing software fixes, which evidently will be included in standard operating system upgrades for PCs and Macs. 

Realistically, though, it appears that actually using these bugs to steal data is very tricky, and that is probably why nobody has discovered evidence that they've ever been used maliciously.  But even if they haven't, everybody knows about them now, and so theoretically a non-upgraded Mac could be spied on without a trace.  I'll put upgrading my OS on my to-do list for the new year, anyway.

This whole episode puts a highlight on the question of what computer researchers do when they discover flaws that no one else had suspected.  We can be grateful that Daniel Gruss and his colleagues at Austria's Graz Technical University, and Jann Horn at Google's Project Zero, who discovered the bugs independently of each other, did the responsible thing and informed Intel and company of the problems as soon as they found they could be exploited.

But it's not that hard to imagine what might have happened if some criminal groups, or worse, a state bent on cyber-warfare, had discovered these flaws first.  There are countries where both highly advanced computer science research is going on, and where researchers would be encouraged not to notify the manufacturers in the U. S., but to inform their government's military of such discoveries for use in future cyberattacks.  It's a little bit like thinking what World War II would have been like if Hitler hadn't chased away most of Germany's leading nuclear physicists, and he had gotten hold of nuclear weapons before the Allies did.

Recently I saw "Darkest Hour," the film about Winston Churchill during the crucial days in May of 1940, as Hitler's armies were overwhelming continental Europe and Churchill accepted the post of Prime Minister of the United Kingdom.  Things looked really bad at the time, and many powerful people advised him to give up the fight as hopeless and settle with Hitler before all was lost.  But needless to say, Churchill made the right decision and rallied Parliament with his famous speech in which he declared "We shall never surrender."

It's easy to get all nostalgic over times when issues were more clear-cut, and the only kinds of military threats were physical things like guns, airplanes, and bombs.  Not that World War II was a picnic—it was the worst self-inflicted cataclysm humanity has devised so far.  And tragic times make heroes, as World War II made a hero of Churchill and millions of otherwise ordinary people who lived through that extraordinary time.

But we have similar heroes working among us even today.  For every researcher and scientist who worked on nuclear weapons, radar, or other advanced military technologies back then, we have people like Gruss and Horn now who discover potential threats to the world's infrastructure and turn them over to those who will mitigate them, not exploit them for evil ends.  So here is a verbal bouquet of thanks to both them and other computer wonks who use their discoveries for good and not evil.  May their tribe increase, and may we never have cause to watch a future reality-based movie about how some nasty computer virus killed thousands before the good guys figured out how to stop it.

Sources:  I referred to articles on Meltdown and Spectre carried on the BBC website at and a report on describing how the bugs were discovered at, as well as the Wikipedia article "Meltdown (security vulnerability)."