The Black Liszt

How to Integrate AI and ML with Production Software

Most enterprises that build software are proudly flying the flag of AI/ML. "We're technology leaders!" their leaders crow in annual reports and at conferences. At the same time, an objective observer usually sees a lack of common sense in the operation of the company's systems. It often appears that, far from needing beyond-human artificial intelligence, they could use some functioning, insect-level instincts that get things done. What's going on? Can it be fixed?

The industry-standard way to fix the problem

The usual fix to the problem is to publicly ignore the fact that there's a problem, while following something like these proven strategies:

  • Brag, loudly and often, about your own and your organization's commitment to AI/ML. The commitment is serious; it's deep and it's broad!
  • Talk about the initiatives you've funded and the top experts you've hired.
  • Talk about the promising things you've got in the works.
  • Use extra phrases to demonstrate your seriousness, things like "1-to-1 personalization" and "adaptive processes" and "digital-first transformation."
  • Put your top executives with fancy titles out there to follow the same strategy, using their own words.

I've given a detailed example of how a top healthcare insurance company follows this strategy while operating at a sophistication level that is best described as "hey, this electronic mail stuff sounds neat, let's give it a try."

Sometimes one of these organizations puts something into practice that works. It typically takes a great deal of time and effort to find and modify the relevant production systems. The efforts that are most likely to make it into production are those that require the least modification. For example, minimal-effort success can sometimes be achieved by extracting data from production systems, subjecting it to AI/ML magic and then either feeding a new system or making it effective in the existing ones with just a couple of insertion points.

The Obstacles to AI/ML Success

The obstacles to AI/ML success have two major aspects:

  • The typical practice of leap-frogging over all the predecessors of AI/ML, straight to maximum sophistication.
  • The extensive, incompatible existing production systems into which AI/ML power has to somehow be inserted.

A good way to understand these obstacles is to imagine that you're in a world in which boats are by far the most important means of bulk transportation. In other words, the world in which we all lived at the start of the 1800's. Suppose by some miracle a small group has invented nuclear power and has decided it would be a great way to provide locomotion to large boats instead of the sails and wind power then in use. What prevents the amazing new technology from being used?

Easy: the boats were designed for sails (with masts and all that) and have no good place to put a nuclear engine, and no way to harness its power to make the boat move. The strong steel and other materials required to make a turbine and propellers don't exist. You can demonstrate the potential of your engine in isolation, but making it work in the boats available at the time won't happen. You can spend as much time as you like blaming the boats, but what's the point?

The solution becomes clear when you study the history of boat locomotion: there were incremental advances in boat materials and design, and in the systems used for powering them. Paddle wheelers have been around for over a thousand years. Here's a medieval representation of a Roman ox-powered paddle wheel boat.

[Image: De Rebus Bellicis, 15th-century miniature of an ox-powered paddle wheel boat]

For serious ocean travel, the choice became the large sail boat, as in this painting of boats near a Dutch fortified town:

[Image: painting of sailing ships near a Dutch fortified town]

Suppose you had a nuclear engine of some kind and were somehow able to make it with materials that were generally not available in the 1600's. How would you use it to power the sail boat? The very thought is ridiculous. The problem is that the boats have no way to accept or utilize the nuclear engine.

How to overcome the obstacles to AI/ML

What would a sensible person do? Exactly what real-life people did in history: incrementally make boats suitable for more powerful means of locomotion, and make more powerful means of locomotion that would make boats go more quickly. Practically. You know, in real life.

That means, among other things, once steam power was created, gradually making it suitable for powering ships with sails -- using the sails to conserve coal when the wind was strong, and using coal to power paddles when the wind wasn't blowing. Then, after materials advanced, invent the screw propeller -- which didn't happen until well into the 1800's -- to make things even better. Eventually, the engine and the ship would converge and be suitable for the introduction of nuclear power.

This is an excellent model for understanding how to overcome the obstacles to powering existing enterprise applications with AI/ML:

  • The AI/ML can only be jammed into existing systems with great effort and by making serious compromises.
    • With a few exceptions, simpler methods that can make real-life improvements should be devised and introduced first, with the portion of AI/ML gradually increasing.
  • The existing enterprise applications are like wooden sailing ships, into which generation-skipping advanced locomotion simply can't be jammed.
    • Evolve the applications with automated decision-making in mind, first putting in simple methods that will produce quick returns.
    • The key to AI-friendly evolution is to center the application architecture on metadata in general, and in particular with metadata for workflow.

The important thing is this: increase the "intelligence" of your applications step by step, concentrating on simple changes for big returns. Who cares whether and to what extent AI/ML is used to make improvements? All that matters is that you make frequent changes to improve the effectiveness, appropriateness and personalization of your applications. Experience shows that relatively simple changes tend to make the greatest impact. See this series of posts for more detail.
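
One way to picture the approach described above is a rough sketch in Python -- every name here is invented for illustration, not taken from any real system. The application calls a decision point whose behavior is selected by metadata, so a simple rule can go into production today and be swapped for an ML model later without touching the calling code.

```python
# Sketch only: names and structure are illustrative, not from any real system.

# Metadata: which decision strategy each workflow step uses.
# Changing "strategy" here is a config edit, not a code change.
DECISION_CONFIG = {
    "route_claim": {"strategy": "simple_rule", "threshold": 10_000},
}

def simple_rule(claim, params):
    # Day one: an "insect-level instinct" that already beats doing nothing.
    return "manual_review" if claim["amount"] > params["threshold"] else "auto_approve"

def ml_model(claim, params):
    # Later: call a trained model; hypothetical interface, never invoked here.
    score = params["model"].predict_risk(claim)
    return "manual_review" if score > 0.8 else "auto_approve"

STRATEGIES = {"simple_rule": simple_rule, "ml_model": ml_model}

def decide(step_name, claim):
    """The only call the existing application makes at this decision point."""
    cfg = DECISION_CONFIG[step_name]
    return STRATEGIES[cfg["strategy"]](claim, cfg)

# Usage: the production code just asks for a decision.
print(decide("route_claim", {"amount": 12_500}))   # -> "manual_review"
```

The point of the sketch is the shape, not the rule: the portion of "intelligence" behind the decision point can grow over time while the surrounding application stays the same.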

 

Posted by David B. Black on 06/28/2022 at 10:05 AM | Permalink | Comments (0)

The Facts are Clear: Hypertension is not a Disease

The medical community, organizations and government agencies couldn't be clearer: hypertension (high blood pressure) is a silent killer. You may not feel anything wrong, but if you've got it, your risk of strokes and heart failure goes way up. Therefore it's essential to monitor and treat this deadly condition.

They're all wrong. Hypertension is not a disease that needs to be cured. It may be a symptom of a problem, but not a problem itself, just like fever is a symptom, not the underlying problem. By treating it as a disease and giving drugs to lower blood pressure, the medical establishment makes patients less healthy and raises costs substantially. With a few exceptions, we would all be better off ignoring blood pressure and most of the associated advice.

Drugs for "Curing" Hypertension

The single most prescribed drug in the US is for lowering cholesterol. But the condition with the most prescriptions written for it is high blood pressure.

[Image: list of the most prescribed drugs in the US]

Here's the story with blood pressure pills.

In fact, a majority of the most prescribed drugs in the U.S. are used to treat high blood pressure or symptoms of it. That’s because 108 million or nearly half of adults in the U.S. have hypertension or high blood pressure.

Is Hypertension a Disease?

There is no doubt that blood pressure can be measured and that it varies greatly. What is hypertension? As I describe here, currently it's a systolic pressure reading above 120 (until 2017 it was above 140). There are lots of things you can measure about people. What makes this measurement bad?

There's a clue buried deep in Doctor-language, a clue that is nearly always missed -- but it's one that doctors with a basic education should know. The official name for high blood pressure is essential hypertension. What's that? Let's ask Dr. Malcolm Kendrick, a long-experienced physician:

At medical school we were always taught – and this has not changed as far as I know – that an underlying cause for high blood pressure will not be found in ninety per cent of patients.

Ninety per cent… In truth, I think it is more than this. I have come across a patient with an absolute, clearly defined cause for their high blood pressure about five times, in total, and I must have seen ten thousand people with high blood pressure. I must admit I am guessing at both figures and may be exaggerating for dramatic effect.

Whatever the exact figures, it is very rare to find a clear, specific cause. The medical profession solved this problem by calling high blood pressure, with no identified cause, “essential hypertension”. The exact definition of essential hypertension is ‘raised blood pressure of no known cause.’ I must admit that essential hypertension certainly sounds more professional than announcing, ‘oh my God, your blood pressure is high, and we do not have the faintest idea why.’ But it means the same thing.

Hypertension = your blood pressure number is high. Kind of like having a high temperature, which we call a "fever," right? Wrong. When you get a fever, doctors first make an effort to determine the cause of the fever! What an idea! The fever is a clue that something is wrong, not the problem itself! Here's the real, bottom-line clue: When you treat fever you treat the underlying cause e.g. bacterial infection, NOT the fever itself! If we treated fever the way we treat hypertension, we would give drugs whose sole purpose was to lower the body temperature, ignoring the underlying bacterial infection that caused the fever. Wouldn't do any good! Maybe we'd sweat less, but the bacteria would rage away inside our bodies. But high blood pressure? Doctors ignore the cause and "treat" the symptom, which can often do more harm than good -- except of course for the drug makers, who make out just fine.

Makes me sick.

Causes of hypertension

From Kendrick:

So, why does the blood pressure rise in some people, and not in others. It is an interesting question. You would think that, by now, someone would have an answer, but they don’t. Or at least no answer that explains anything much.

Just as fever is caused by an infection (or something else), could it be possible that hypertension results from some underlying problem? Kendrick again:

Looking at this from the other direction, could it be that cardiovascular disease causes high blood pressure. Well, this would still explain why the two things are clearly associated, although the causal pathway may not be a → b. It could well be b → a.

I must admit that I like this idea better, because it makes some sense. If we think of cardiovascular disease as the development of atherosclerotic plaques, leading to thickening and narrowing of the arteries then we can see CVD is going to reduce blood flow to vital organs, such as the brain, the kidneys, the liver, the heart itself.

These organs would then protest, leading to the heart pumping harder to increase the blood flow and keep the oxygen supply up. The only way to increase blood flow through a narrower pipe, is to increase the pressure. Which is what then happens.

Over time, as the heart is forced to pump harder, and harder, the muscle in the left ventricle will get bigger and bigger, causing hypertrophy. Hypertrophy means ‘enlargement.’ So, in people with long term, raised blood pressure, we would expect to see left ventricular hypertrophy (LVH). Which is exactly what we do see.

He goes on to give lots of detail about how this takes place, if you're interested.

Correlation and Causation

There's a little problem that everyone who knows about science and statistics is supposed to know. It's the difference between correlation and causation. Two things seem to happen at the same time. They are correlated. No problem. But does one of them cause the other? That's a whole other thing, and it's super-important. At McDonald's, burgers and fries are often seen together. They're correlated. Did the burger cause the fries? Fries cause the burgers? Nope. They're just listed together on the menu and lots of people like them together.

How about knife cuts and bleeding? Definitely correlated. Causation? By looking at repeated cases of knives making cuts, you can determine that putting a knife into someone's skin nearly always causes bleeding.

This is the problem at the heart of hypertension: hypertension is correlated with heart attacks and strokes, but -- except perhaps in extreme cases -- it can't be shown to cause them in the vast majority of cases.

The range of blood pressure

The authorities don't like to talk about this, but blood pressure varies HUGELY not just from person to person, but also by age and for a single person during the day!

Here's something to give you the idea from a scientific paper:

[Image: chart of blood pressure variation from a scientific paper]

The range of pressure for a single person can be rather large. I just took my pressure this morning. The systolic was 126. In the previous days the readings were 159 and 139. I have taken my pressure with different devices over a year, and that variation is not unusual. It can vary that much in a couple of hours, depending on my activity level.

It is well-known in the medical community that blood pressure varies naturally with age, generally rising as you get older. Has anyone documented this statistically? If they have, I can't find it. Generally, what is normal is roughly 100 plus your age, so a 50-year-old man would have 150, roughly 10 less for women. Here is an interesting description of the age factor from a former NASA astronaut and doctor.

The assumed causation fails to hold

A surprising amount of modern medical misinformation goes back to the diet-heart hypothesis put forward by Ancel Keys and supported by the Seven Countries Study. It's what led to the obesity-causing fat-is-bad diet recommendations and the ongoing harm of reducing blood cholesterol using statins. Out of the same witch's brew came the notion that high blood pressure causes heart disease. This notion was supposedly locked down by the famous Framingham study, which continues to this day.

In the year 2000, the edifice crashed when a careful review, "There is a non-linear relationship between mortality and blood pressure," was published in the journal of the European Society of Cardiology. It includes references to the original Keys study and many following journal articles.

The article is prefaced by a quote that is so appropriate, I can't help but share it with you:

"For every complicated problem there is a solution that is simple, direct, understandable, and wrong." H. L. Mencken

The authors start by explaining the current paradigm:

"the relation of SBP (systolic blood pressure) to risk of death is continuous, graded and strong..." The formulation of this "lower is better" principle ... forms the foundation for the current guidelines for hypertension.

They point out that Ancel Keys himself concluded that "the relationship of overall and coronary heart disease death to blood pressure was unjustified."

They went on to examine the detailed Framingham study data.

Shockingly, we have found that the Framingham data in no way supported the current paradigm to which they gave birth.

...

Systolic blood pressure increases at a constant rate with age. In sharp contrast to the current paradigm, we find that this increase does not incur additional risk. More specifically, all persons in the lower 70% of pressures for their age and sex have equivalent risk.

Dr. Kendrick, in his book Doctoring Data, points out:

Has this paper ever been refuted? No, it has not. Sadly, it was given the worst possible treatment that can be dished out by the medical establishment. It was completely ignored.

...

The benefits of blood-pressure lowering, whatever the level, became so widely accepted years ago that it has not been possible, ethically,[viii] to do a placebo-controlled study for a long time. I am not aware of any placebo-controlled trials that have been done in the last twenty years, or so.

A bit of sanity

In the same year (2017) that the AHA and cardiologists were lowering the target blood pressure for everyone from 140 to 120, a group representing family physicians published an official guideline for treating hypertension in adults aged 60 and over. Their method was rigorous, taking into account all available studies. Here is their core recommendation:

ACP and AAFP recommend that clinicians initiate treatment in adults aged 60 years or older with systolic blood pressure persistently at or above 150 mm Hg to achieve a target systolic blood pressure of less than 150 mm Hg to reduce the risk for mortality, stroke, and cardiac events. (Grade: strong recommendation, high-quality evidence).

What a breath of fresh air! And completely in line with this data-driven review, which showed that of the large number of people taking anti-hypertensive drugs, just 1 in 125 were helped (a death prevented), while 1 in 10 were harmed by side effects. Also in line with this careful study of people with elevated blood pressure in the range of 140-160 mm Hg; the study showed that none were helped by the drugs, while 1 in 12 were harmed.

BTW, if you're not familiar with the concept of NNT, you should learn about it. It's crucial.
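
For those unfamiliar with it, NNT (number needed to treat) is simply the reciprocal of the absolute risk reduction, and NNH (number needed to harm) is defined the same way for harms -- these are standard definitions, not anything specific to the studies above:

```latex
\mathrm{NNT} \;=\; \frac{1}{\text{absolute risk reduction}}
\qquad\qquad
\mathrm{NNH} \;=\; \frac{1}{\text{absolute risk of harm}}
```

So the figures above translate directly: helping 1 person in 125 means an NNT of 125 (treat 125 people to prevent one death), while harming 1 in 10 means an NNH of 10.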

Hypertension Drugs can hurt you

Doctors dish out hypertension drugs like candy. It's often the case that two different kinds of drugs will be required to get your blood pressure to "safe" levels. For reasons that don't seem to be studied, it's rare indeed for doctors to mention side effects; yet in repeated studies, the generally data-suppressing researchers can't help but mention that the side effects are so bad that roughly 10% of study participants drop out of the study! (See above for references.)

There are good lists of side effects at Drugs.com. Here's some information about Amlodipine:

Side effects requiring immediate medical attention

Along with its needed effects, amlodipine may cause some unwanted effects. Although not all of these side effects may occur, if they do occur they may need medical attention.

Check with your doctor immediately if any of the following side effects occur while taking amlodipine:

More common

  • Swelling of the ankles or feet

Less common

  • Chest tightness
  • difficult or labored breathing
  • dizziness
  • fast, irregular, pounding, or racing heartbeat or pulse
  • feeling of warmth
  • redness of the face, neck, arms, and occasionally, upper chest

Rare

  • Black, tarry stools
  • bleeding gums
  • blistering, peeling, or loosening of the skin
  • blood in the urine or stools
  • blurred vision
  • burning, crawling, itching, numbness, prickling, "pins and needles", or tingling feelings
  • chest pain or discomfort
  • chills
  • cold and clammy skin
  • cold sweats
  • confusion
  • cough
  • dark yellow urine
  • diarrhea
  • dilated neck veins
  • dizziness or lightheadedness when getting up from a lying or sitting position
  • extra heartbeats
  • fainting
  • fever
  • itching of the skin
  • joint or muscle pain
  • large, hive-like swelling on the face, eyelids, lips, tongue, throat, hands, legs, feet, or sex organs
  • numbness and tingling of the face, fingers, or toes
  • pain in the arms, legs, or lower back, especially pain in the calves or heels upon exertion
  • painful or difficult urination
  • pale, bluish-colored, or cold hands or feet
  • pinpoint red or purple spots on the skin
  • red, irritated eyes
  • redness of the face, neck, arms, and occasionally, upper chest
  • redness, soreness or itching skin
  • shakiness in the legs, arms, hands, or feet
  • slow or irregular heartbeat
  • sore throat
  • sores, ulcers, or white spots on the lips or in the mouth
  • sores, welting, or blisters
  • sudden sweating
  • sweating
  • swelling of the face, fingers, feet, or lower legs
  • swollen glands
  • trembling or shaking of the hands or feet
  • unsteadiness or awkwardness
  • unusual bleeding or bruising
  • unusual tiredness or weakness
  • weak or absent pulses in the legs
  • weakness in the arms, hands, legs, or feet
  • weight gain
  • yellow eyes or skin

Then there are the ones judged to be less severe:

Side effects not requiring immediate medical attention

Some side effects of amlodipine may occur that usually do not need medical attention. These side effects may go away during treatment as your body adjusts to the medicine. Also, your health care professional may be able to tell you about ways to prevent or reduce some of these side effects.

Check with your health care professional if any of the following side effects continue or are bothersome or if you have any questions about them:

Less common

  • Acid or sour stomach
  • belching
  • feeling of warmth
  • heartburn
  • indigestion
  • lack or loss of strength
  • muscle cramps
  • redness of the face, neck, arms, and occasionally, upper chest
  • sleepiness or unusual drowsiness
  • stomach discomfort, upset, or pain

Those are the issues with just one of the many hypertension drugs, one of the most widely prescribed!

Conclusion

Blood pressure varies greatly, reflecting the human body's amazing self-regulation systems. In the vast majority of cases, blood pressure goes up with age. Lowering it by drugs does more harm than good. Except perhaps in extreme cases, high blood pressure does not cause disease. When pressure is extremely high, a search for the cause should be made. The ongoing focus on hypertension as a disease reflects nothing but the stubborn refusal of the medical establishment to admit that they were wrong, and of the pharma companies to give up a lucrative market.

Posted by David B. Black on 06/13/2022 at 10:48 AM | Permalink | Comments (0)

Flowcharts and Workflow in Software

The concept of workflow has been around in software from the beginning. It is the core of a great deal of what software does, including business process automation. Workflow is implicitly implemented in most bodies of software, usually in a hard-coded, ad-hoc way that makes it laborious and error-prone to implement, understand, modify and optimize. Expressing it instead as editable declarative metadata that is executed by a small body of generic, application-independent code yields a huge increase in productivity and responsiveness. It also enables painless integration of ML and AI. There are organizations that have done exactly this; they benefit from massive competitive advantage as a result.

Let’s start with some basics about flowcharts and workflow.

Flowcharts

Flowcharts pre-date computers. The concept is simple enough, as shown by this example from Wikipedia:

[Image: Wikipedia's flowchart for fixing a lamp]

The very earliest computer programs were designed using flowcharts, illustrated for example in a document written by John von Neumann in 1947. The symbols and methods became standardized. By the 1960’s software designers used templates like this from IBM

[Image: IBM flowcharting template]

to produce clean flowcharts in standardized ways.

Flowcharts and Workflow

Flowcharts as a way to express workflows have been around for at least a century. Workflows are all about repeatable processes, for example in a manufacturing plant. People would systematize a process in terms of workflow in order to understand and analyze it. They would create variations to test to see if the process could be improved. The starting motivation would often be consistency and quality. Then it would often shift to process optimization – reducing the time and cost and improving the quality of the results. Some of the early work in Operations Research was done to optimize processes.

Workflow is a natural way to express and help understand nearly any repeatable process, from manufacturing products to taking and delivering orders in a restaurant. What else is a repeatable process? A computer program is by definition a repeatable process. Bingo! Writing the program may take considerable time and effort, just like designing and building a manufacturing plant. But once written, a computer program is a repeatable process. That’s why it made sense for the very earliest computer people like John von Neumann to create flowcharts to define the process they wanted the computer to perform repeatedly.

What’s in a Flowchart?

There are different representations, but the basic work steps are common sense:

  • Get data from somewhere (another program, storage, a user)
  • Do something to the data
  • Test the data, and branch to different steps depending on the results of the test
  • Put the data somewhere (another program, storage, a user)
  • Lots of these work steps are connected in a flow of control

This sounds like a regular computer software program, right? It is! When charted at the right level of detail, the translation from a flowchart to a body of code is largely mechanical. But humans perform this largely mechanical task, and get all wrapped up in the fine details of writing the code – just like pre-industrial craftsmen did.

Hey, that's not just a metaphor -- it is literally true! The vast, vast majority of software programming is done in a way that appears from the outside to be highly structured, but in fact is designing and crafting yet another fine wood/upholstery chair (each one unique!) or, for advanced programmers, goblets and plates made out of silver for rich customers.
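
To see how mechanical that translation can be, here is a rough sketch -- Python, with every name invented for illustration -- of the work steps above expressed as declarative metadata and executed by a tiny generic engine. A real workflow engine would add persistence, error handling, and much more.

```python
# Sketch only: a toy, generic "flowchart runner". All names are invented.

# The flowchart, expressed as declarative metadata instead of hand-written code.
# Each step: get data, do something, test and branch, or put data somewhere.
FLOW = {
    "start":   {"kind": "get",  "source": "order_queue",        "next": "check"},
    "check":   {"kind": "test", "expr": lambda d: d["amount"] > 100,
                "true": "approve", "false": "auto_ok"},
    "approve": {"kind": "do",   "action": lambda d: {**d, "status": "needs_review"},
                "next": "store"},
    "auto_ok": {"kind": "do",   "action": lambda d: {**d, "status": "approved"},
                "next": "store"},
    "store":   {"kind": "put",  "target": "order_db",           "next": None},
}

def run(flow, get_fn, put_fn, start="start"):
    """Generic engine: walks the metadata; knows nothing about orders."""
    step_name, data = start, None
    while step_name:
        step = flow[step_name]
        if step["kind"] == "get":
            data, step_name = get_fn(step["source"]), step["next"]
        elif step["kind"] == "do":
            data, step_name = step["action"](data), step["next"]
        elif step["kind"] == "test":
            step_name = step["true"] if step["expr"](data) else step["false"]
        elif step["kind"] == "put":
            put_fn(step["target"], data)
            step_name = step["next"]
    return data

# Usage with stand-in I/O functions.
result = run(FLOW, get_fn=lambda src: {"amount": 250},
             put_fn=lambda tgt, d: print(f"{tgt} <- {d}"))
```

Change the FLOW metadata and the process changes; the engine never does. That is the difference between editing a map and re-crafting the directions by hand.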

Workflow

In the software world, workflow in general has been a subject of varying interest from the beginning. It can be applied to any level of detail. It has led to all sorts of names and even what amount to fashion trends. There is business process management. Business process engineering. And re-engineering. And business process automation. A specialized version of workflow is simulation software, which led early programmers to invent what came to be called "object-oriented programming." To see more about this on-going disaster that proved to be no better for simulating systems than it has been for software in general, see this.

When document image processing became practical in the 1980’s, the related term workflow emerged to describe the business process an organization took to process a document from its arrival through various departments and finally to resolution and archiving of the document. The company that popularized this kind of software, Filenet, was bought by IBM. I personally wrote the workflow software for a small vendor of document image processing software at that time.

Workflow in practice

There has been lots of noise about what amounts to workflow over the years, with books, movements and trends. A management professor in the 1980's talked about how business processes could be automated and improved using business process re-engineering. He said that each process should be re-thought from scratch -- otherwise you would just be "paving the cow paths," instead of creating an optimal process. As usual, lots of talk and little action. Here's the story of my personal involvement in such a project in which the people in charge insisted they were doing great things, while in fact they were spending lots of money helping the cows move a bit faster than they had been.

The Potential of Workflow

The potential of workflow can be understood in terms of maps and driving from one place to another. I've explained the general idea here.

Most software design starts with the equivalent of figuring out a map that shows where you are and where you want to get to. Then the craftsmanship begins. You end up with a hard-coded set of voluminous, low-level "directions" for driving two blocks, getting in the left lane, turning left, etc.

When the hard-coded directions fail to work well and the complaints are loud enough, the code is "enhanced," i.e., made even more complex, voluminous and hard to figure out by adding conditions and alternative directions.

Making the leap to an online, real time navigation system is way beyond the vast majority of software organizations. You know, one that takes account of changes, construction, feedback from other drivers on similar routes about congestion, whether your vehicle has a fee payment device installed, whether your vehicle is a truck, etc. Enhancements are regularly made to the metadata map and the ML/AI direction algorithms, which are independent of map details.

When software stays at the level of craftsmanship, you're looking at a nightmare of spaghetti code. Your cow paths aren't just paved -- they have foundations with top-grade rebar, concrete and curbs crafted of marble.

Conclusion

Metadata-driven workflow is the next step beyond schema enhancement for building automated systems to perform almost any job. It's a proven approach that many organizations have deployed -- literally for decades. But all the leaders of computing, including Computer Science departments at leading universities, remain obsessed with subjects that are irrelevant to the realities of building software that works; instead they stay focused on the wonders of craftsman-level low-level software languages. It's a self-contained universe where prestige is clearly defined and has nothing to do with the eternal truths of how optimal software is built.

 

Posted by David B. Black on 06/07/2022 at 09:39 AM | Permalink | Comments (0)

The Experts are Clear: Control your Blood Pressure

Most of us have heard about high blood pressure. It's one of those conditions that afflict a large number of people. Nearly half of American adults are said by the AHA to have it! You may be able to control it by maintaining a healthy lifestyle: avoiding saturated fats, salt and alcohol, keeping your weight down and getting exercise. Fortunately, there are drugs that can help keep it under control.

Why should anyone care? Strokes! Heart attacks! Premature death!

Is this one of those things that floats in the air but isn't real? Let's take a look at what people who know what they're doing say about it.

The American Heart Association (AHA)

Blood pressure is all about the heart, right? So let's start with the medical association that's all about keeping our hearts healthy. They make it very clear why we should care:

[Image: AHA diagram of the health threats from high blood pressure]

Those folks at the AHA may be doctors who can't write legible prescriptions, but they were sure able to rope someone into producing a scary diagram! OK, you've got my attention. Here's the facts with blood pressure:

[Image: AHA high blood pressure chart, April 2022]

What can I do?? What if I maintain a good weight, eat a heart-healthy diet, cut back on salt and the rest and my BP is still scary? There are medications.

How long will you have to take your medication? Perhaps for the rest of your life.

OK, then. If that's what has to be done to avoid the things in the scary diagram above, then so be it.

More American Heart Association (AHA)

I decided to dig a bit deeper. When did they come to this conclusion?

Here is a chart from the AHA as it was in May 2010:

[Image: the same AHA blood pressure chart as it appeared in May 2010]

Compare this to the same chart on the same site in April 2022, shown earlier.

It appears some things have changed! Basically they've decided to crank up the alarm level on most of the numbers. You can observe the differences yourself; Stage 2 hypertension is a good example. In 2010 you had it if your numbers were more than 160/100, while now it's 140/90. In 2010, if your pressure was below 140, you didn't "have" hypertension -- just "prehypertension." Now, stage 1 hypertension starts at 130.

I did some research. The change happened in 2017. Here is the AHA's news release on the subject:

High blood pressure should be treated earlier with lifestyle changes and in some patients with medication – at 130/80 mm Hg rather than 140/90 – according to the first comprehensive new high blood pressure guidelines in more than a decade. The guidelines are being published by the American Heart Association (AHA) and the American College of Cardiology (ACC) for detection, prevention, management and treatment of high blood pressure.

The guidelines were presented today at the Association’s 2017 Scientific Sessions conference in Anaheim, the premier global cardiovascular science meeting for the exchange of the latest advances in cardiovascular science for researchers and clinicians.

Rather than 1 in 3 U.S. adults having high blood pressure (32 percent) with the previous definition, the new guidelines will result in nearly half of the U.S. adult population (46 percent) having high blood pressure, or hypertension.

A whole lot more people have high blood pressure! I sure hope they did their homework on this. Reading on we find:

The new guidelines were developed by the American Heart Association, American College of Cardiology and nine other health professional organizations. They were written by a panel of 21 scientists and health experts who reviewed more than 900 published studies. The guidelines underwent a careful systematic review and approval process.

OK, it looks like a whole team of experts was in on this one. 

Harvard Medical School

Better check with the people who train the best doctors. Let's make sure this is really up to date.


Here's what they have to say:

Arteries that are tensed, constricted, or rigid offer more resistance. This shows up as higher blood pressure, and it makes the heart work harder. This extra work can weaken the heart muscle over time. It can damage other organs, like the kidneys and the eyes. And the relentless pounding of blood against the walls of arteries causes them to become hard and narrow, potentially setting the stage for a heart attack or stroke.

Most people with high blood pressure (known medically as hypertension) don't know they have it. Hypertension has no symptoms or warning signs. Yet it can be so dangerous to your health and well-being that it has earned the nickname "the silent killer." When high blood pressure is accompanied by high cholesterol and blood sugar levels, the damage to the arteries, kidneys, and heart accelerates exponentially.

Sounds scary. Can I do anything about it?

High blood pressure is preventable. Daily exercise, following a healthy diet, limiting your intake of alcohol and salt, reducing stress, and not smoking are keys to keeping blood pressure under control. When it creeps into the unhealthy range, lifestyle changes and medications can bring it down.

They agree. There are pills I can take.

Department of Health and Human Services (HHS)

Let's make sure the government is on board. After some looking, it was very clear that HHS is in favor of keeping blood pressure under control. Finding out exactly what they think and what they're doing proved to be a bit of a challenge. Here are some of the things I learned our government is doing to help us:

  • They have published standards and require reports in which health providers specify the frequency of visits and the other things they are doing with their patient population to control blood pressure.
  • They sponsored the Million Hearts Risk Check Challenge, asking developers to create a new consumer app that informs consumers of their general heart risk, motivates them to obtain a more accurate risk assessment by entering their blood pressure and cholesterol values, and directs them to nearby community pharmacies (and other locations) offering affordable and convenient blood pressure and cholesterol screenings.
  • The Surgeon General issued a Call to Action to Control Hypertension. It's a major document issued in 2020. Sadly, the link to the document was broken, so I wasn't able to read this important initiative. But here's a helpful diagram about it:

[Image: diagram from the Surgeon General's Call to Action to Control Hypertension]

The fact that the document was issued is impressive. The section introducing it has a stirring ending: "We must act to preserve the nation’s cardiovascular health now and into the future. Together, we’ve got this!"

Conclusion

Governments and the big authorities in the field are united in the effort to keep us all healthier by encouraging us to address the "silent killer" of hypertension. They want us to address it first of all by lifestyle changes, but if that fails, medication is available to keep things under control. Even if we have to take a couple of pills a day for the rest of our lives, that's a small price to pay for having a longer, healthier life.
 
This is an issue that is similar in many ways to the goal of maintaining a heart-healthy diet that minimizes saturated fat in meat and dairy products, and to combating LDL, the "bad" cholesterol in our blood; they all contribute in their own ways to keeping us healthy.
 
We should all have our blood pressure checked and do what we have to do to keep it under control. If, that is, we want to live a long, heart-healthy life. Naturally there are contrasting views on this seemingly settled topic, which I'll address later.
 

Posted by David B. Black on 05/31/2022 at 09:40 AM | Permalink | Comments (0)

Cartoons and Video games evolved into Bitcoin and NFT’s

Bitcoin and other cryptocurrencies are in the news. NFT’s (non-fungible tokens) have exploded onto the scene, with people spending large amounts of money to acquire unique rights to digital images. The explosion of invention and innovation is amazing, isn’t it?

Except that it's all just minor variations of things that were created decades ago, grew into huge markets with the participation of a good part of the world's population, and continue to grow today. Invention? Creativity? How about minor variations of proven ideas, giving them a new name and slightly different context, and getting super-rich?

From Drawing to Cartoons to Video Games

Drawing, sculpting and otherwise creating artificial images of the reality we experience has a long history.

For example, here’s a painting of a bovine from a cave created by early humans over 40,000 years ago:

[Image: Lubang Jeriji Saléh cave painting of a bull]

Drawings that suggest reality but are purposely different from real things are called cartoons, and go back hundreds of years, becoming more widespread in the 1800’s in print media.

Then there was a breakthrough: animation. Leveraging early movie technology, artists worked enormously hard to create a fast-changing sequence of images to create the illusion of motion. Along with sound, you could now go to a theater and watch and hear a whole cartoon movie, filled with characters and actions that could never happen in real life. Characters like Mickey Mouse and Bugs Bunny became part of modern culture.

The next big step took place after computers were invented and got video screens. Of course the computers transformed the process of creating animation. But animation was always like watching a movie: the human could only watch and listen. With computers, the possibility first arose for a person's actions to directly and immediately change what happened on the screen. The video game was born.

The video game has gone through an extensive evolution from the primitive, simple Space War to immersive MMORPG's (massively multiplayer online role-playing games), enabling players to interact with each other in evolving shared animated worlds, often with fighting but also including other activities.

World of Warcraft (WoW) wasn't the first, but became the most popular of the MMORPG's.

Similar to other MMORPGs, the game allows players to create a character avatar and explore an open game world in third- or first-person view, exploring the landscape, fighting various monsters, completing quests, and interacting with non-player characters (NPCs) or other players. The game encourages players to work together to complete quests, enter dungeons and engage in player versus player (PvP) combat, however the game can also be played solo without interacting with others. The game primarily focuses on character progression, in which players earn experience points to level up their character to make them more powerful and buy and sell items using in-game currency to acquire better equipment, among other game systems.

World of Warcraft was a major critical and commercial success upon its original release in 2004 and quickly became the most popular MMORPG of all time, reaching a peak of 12 million subscribers in 2010.[4] The game had over one hundred million registered accounts by 2014[5] and by 2017, had grossed over $9.23 billion in revenue, making it one of the highest-grossing video game franchises of all time. The game has been cited by gaming journalists as the greatest MMORPG of all time and one of the greatest video games of all time.

The industries creating hardware and software for these artificial worlds have grown to be huge. In 2020 video gaming generated over $179 billion in global revenue, having surpassed the film industry years before.

Video games aren’t just for kids. There are an estimated 3.24 billion gamers across the globe.

In the US the numbers are huge. “Three out of every four, or 244 million, people in the U.S. play video games, an increase of 32 million people since 2018." Gamers spend lots of time on their games: “... gamers average 14 hours per week playing video games.”

Game World and Virtual Economies

Huge numbers of people go to a screen or put on a headset and "enter" the world of a video game, where they often spend hours at a time. While in that world, they can move from place to place as an observer, or as the controller of their personal avatar. They can interact with others, as shown by this scene from the virtual world of Second Life in 2003.

[Image: scene from Second Life (Second Life 11th Birthday Live, Drax Files Radio Hour)]

Long before Bitcoin was created, video games had virtual economies with digital currencies.

The currency used in a game world can be called different things. For example in World of Warcraft it's called -- big shock coming up here -- Gold. Gold can be earned by players accomplishing things in the game world, and can be spent for skills or in-game objects. Players can buy and sell items among themselves using such currencies. Many games enable players to buy in-game currencies using real money. In some cases, in-game virtual "land" is also for sale.

Long before Bitcoin, markets arose to enable in-game currencies to be traded (exchanged) for real-world currencies. It is now a multi-billion dollar industry. "In 2001, EverQuest players Brock Pierce and Alan Debonneville founded Internet Gaming Entertainment Ltd (IGE), a company that offered not only the virtual commodities in exchange for real money but also provided professional customer service." The company was the largest such on-line exchange and accounted for hundreds of millions of dollars of transactions.

Video Games, Bitcoin and NFT's

The first Bitcoin was sent in 2009. It wasn't much used or valued until 2013. Ethereum first went live in 2015. By this time there were already MMORPG's with many hundreds of millions of players earning, spending and exchanging digital currencies involving virtual objects in their game worlds.

Let's see how the things used by literally billions of gamers compare to Bitcoin (and other crypto-currencies) and NFT's.

  • Games have digital currencies with no real-world value.
    • Sounds like Bitcoin and other crypto-currencies
  • In-game virtual objects can be bought and sold using in-game currencies
    • Sounds like buying crypto-world NFT's with Bitcoin
  • New units of the digital currency are created by the game software
    • New crypto is created by Bitcoin mining software
  • Game currencies can be used and exchanged among gamers
    • Same with Bitcoin
  • Game currencies can be exchanged for and bought with real-world money
    • Same with Bitcoin
  • There are exchanges outside the game that enable buying/selling
    • Same with Bitcoin
  • The exchange price can vary greatly
    • Same with Bitcoin
  • Teams create new games with currencies and virtual objects
    • Teams create new crypto-currencies and NFT's

Still think there's no relationship between gaming and crypto? How about, as mentioned above, the fact that Brock Pierce and a partner founded the game currency exchange IGE in 2001, and the same Mr. Pierce was active in crypto-currency by 2013 and became a "Bitcoin billionaire" by 2018.

Of course, the new worlds of crypto and NFT's are different in some important ways from the gaming worlds. Games, along with their objects and currencies, are created and managed by the game company. While there's more control than is generally recognized, crypto-currencies have a large degree of self-management with their built-in miners. Similarly, NFT's are created independently.

Conclusion

First, Bitcoin came seemingly out of nowhere in 2009. A few years later, variations of Bitcoin appeared on the market. An astounding explosion of crypto followed, along with digital objects that "live" in the crypto world.

Like many other "brand new" things, the worlds of crypto and NFT's have remarkably close relations to the world of gaming, from which they appear to have evolved. Compared to the gaming world, the number of people invested in crypto is truly tiny, hundredths of a percent. But the inflation and amount of real-world currency that has been converted to crypto dwarfs the amounts in the gaming world.

As with many other tech trends, the history and evolution of the elements of the trend reward study.

Note: this was originally published on Forbes.

Posted by David B. Black on 05/28/2022 at 10:44 AM | Permalink | Comments (0)

How to Improve Software Productivity and Quality: Schema Enhancements

Most efforts to improve programmer productivity and software quality fail to generate lasting gains. New languages, new project management and the rest are decades-long disappointments – not that anyone admits failure, of course.

The general approach of software abstraction, i.e., moving program definition from imperative code to declarative metadata, has decades of success to prove its viability. It’s a peculiar fact of software history and Computer Science that the approach is not mainstream. So much the more competitive advantage for hungry teams that want to fight the entrenched software armies and win!

The first step – and it’s a big one! – on the journey to building better software more quickly is to migrate application functionality from lines of code to attributes in central schema (data) definitions.

Data Definitions and Schemas

Every software language has two kinds of statements: statements that define and name data and statements that do things that are related to getting, processing and storing data. Definitions are like a map of what exists. Action statements are like sets of directions for going between places on a map. The map/directions metaphor is key here.

In practice, programmers tend to first create the data definitions and then proceed to spend the vast majority of their time and effort creating and evolving the action statements. If you look at most programs, the vast majority of the lines are “action” lines.

The action lines are endlessly complex, needing books to describe all the kinds of statements, the grammar, the available libraries and frameworks, etc. The data definitions are extremely simple. They first and foremost name a piece of data, and then (usually) give its type, which is one of a small selection of things like integer, character, and floating point (a number that has decimal digits). There are often some grouping and array options that allow you to put data items into a block (like address with street, town and state) and sets (like an array for days in a year).
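
As a concrete (if toy) illustration of the map/directions split, here is a sketch in Python with invented fields; the first block is pure data definition, the second is action code.

```python
from dataclasses import dataclass

# Data definitions: the "map". Just names, types and grouping.
@dataclass
class Address:
    street: str
    town: str
    state: str

@dataclass
class Customer:
    name: str
    age: int
    balance: float
    address: Address

# Action statements: the "directions". Getting, testing and storing data.
def apply_credit(customer: Customer, amount: float) -> Customer:
    if amount <= 0:
        raise ValueError("credit must be positive")
    customer.balance += amount
    return customer
```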

One of the peculiar elements of software language evolution is whether the data used in a program is defined in a single place or multiple places. You would think – correctly! – that the sensible choice is a single definition. That was the case for the early batch-oriented languages like COBOL, which has a shared copybook library of data definitions. A single definition was a key aspect of the 4-GL languages that fueled their high productivity.

Then the DBMS grew into a standard part of the software toolkit; each DBMS has its own set of data definitions, called a "schema." Schemas enable each piece of data to have a name, a data type and be part of a grouping (table). That's pretty much it! Then software began to be developed in layers, like UI, server and database, each with its own data/schema definitions and language. Next came services and distributed applications, each with its own data definitions and often written in different languages. Each of these things needs to "talk" with the others, passing and getting back data, with further definitions for the interfaces.

The result of all this was an explosion of data definitions, with what amounts to the same data being defined multiple times in multiple languages and locations in a program.

In terms of maps and directions, this is very much like having many different collections of directions, each of which has exactly and only the parts of the map those directions traverse. Insane!

The BIG First Step towards Productivity and Quality

The first big step towards sanity, with the nice side effect of productivity and quality, is to centralize all of a program’s data definitions in a single place. Eliminate the redundancy!

Yes, it may take a bit of work. The central schema would be stored in a multi-part file in a standardized format, with selectors and generators for each program that shared the schema. Each sub-program (like a UI or service) would generally only use some of the program’s data, and would name the part it used in a header. A translator/generator would then grab the relevant subset of definitions and generate them in the format required for the language of the program – generally not a hard task, and one that in the future should be provided as a widely-available toolset.

Why bother? Make your change in ONE place, and with no further work it’s deployed in ALL relevant places. Quality (no errors, no missing a place to change) and productivity (less work). You just have to bend your head around the "radical" thought that data can be defined outside of a program.
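
Here is a deliberately tiny sketch of the idea -- the schema format and the generators are invented for illustration, not taken from any real toolset. One central definition is used to generate both a database table and an application-layer class.

```python
# Sketch only: a toy central schema and generators. Formats are invented.

CENTRAL_SCHEMA = {
    "customer": {
        "name": {"type": "string",  "max_len": 80},
        "age":  {"type": "integer"},
        "zip":  {"type": "string",  "max_len": 10},
    }
}

SQL_TYPES = {"string": "VARCHAR", "integer": "INTEGER"}

def to_sql(table, fields):
    """Generate the database's copy of the definitions."""
    cols = []
    for name, f in fields.items():
        t = SQL_TYPES[f["type"]]
        if "max_len" in f:
            t += f"({f['max_len']})"
        cols.append(f"  {name} {t}")
    return f"CREATE TABLE {table} (\n" + ",\n".join(cols) + "\n);"

def to_python_class(table, fields):
    """Generate the application layer's copy of the same definitions."""
    lines = [f"class {table.capitalize()}:"]
    lines += [f"    {name}: {'int' if f['type'] == 'integer' else 'str'}"
              for name, f in fields.items()]
    return "\n".join(lines)

# One change in CENTRAL_SCHEMA shows up in every generated artifact.
print(to_sql("customer", CENTRAL_SCHEMA["customer"]))
print(to_python_class("customer", CENTRAL_SCHEMA["customer"]))
```

The same pattern extends to UI field lists, API payload definitions and anything else that repeats the data definitions today.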

If you're scratching your head and thinking that this approach doesn't fit into the object-oriented paradigm in which data definitions are an integral part of the code that works with them, i.e. a Class, you're right. Only by breaking this death-grip can we eliminate the horrible cancer of redundant data definitions that make bodies of O-O code so hard to write and change. That is the single biggest reason why O-O is bad -- but there are more!

The BIG Next Step towards Productivity and Quality

Depending on your situation, this can be your first step.

Data definitions, as you may know, are pretty sparse. There is a huge amount of information we know about data that we normally express in various languages, often in many places. When we put a field on a screen, we may:

  • Set permissions to make it not visible, read-only or editable.
  • If the field can be entered, it may be required or optional
  • Display a label for the field
  • Control the size and format of the field to handle things like selecting from a list of choices or entering a date
  • Check the input to make sure it’s valid, and display an error message if it isn’t
  • Fields may be grouped for display and be given a label, like an address

Here's the core move: each one of the above bullet items -- and more! -- should be defined as attributes of the data/schema definition. In other words, these things shouldn't be arguments of functions or otherwise part of procedural code. They should be just like the Type attribute of a data definition is, an attribute of the data definition.

This is just in the UI layer. Why not take what’s defined there and apply it as required at the server and database layers – surely you want the same error checking there as well, right?
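
Here is a minimal sketch of what such enriched definitions might look like (the attribute names are invented); the point is that a single generic routine can enforce them in the UI, on the server, or anywhere else.

```python
# Sketch only: invented attribute names on an enriched schema definition.

SCHEMA = {
    "email": {
        "type": "string", "label": "Email address", "required": True,
        "editable": True, "max_len": 254,
        "validate": lambda v: "@" in v, "error": "Please enter a valid email",
    },
    "plan": {
        "type": "string", "label": "Plan", "required": True,
        "editable": True, "choices": ["basic", "pro", "enterprise"],
    },
}

def validate(record, schema=SCHEMA):
    """Generic check, usable verbatim in the UI layer and on the server."""
    errors = {}
    for name, spec in schema.items():
        value = record.get(name)
        if spec.get("required") and value in (None, ""):
            errors[name] = f"{spec['label']} is required"
        elif value is not None:
            if "max_len" in spec and len(value) > spec["max_len"]:
                errors[name] = f"{spec['label']} is too long"
            elif "choices" in spec and value not in spec["choices"]:
                errors[name] = f"{spec['label']} must be one of {spec['choices']}"
            elif "validate" in spec and not spec["validate"](value):
                errors[name] = spec["error"]
    return errors

print(validate({"email": "not-an-email", "plan": "pro"}))
# -> {'email': 'Please enter a valid email'}
```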

Another GIANT step forward

Now we get to some fun stuff. You know all that rhetoric about “inheritance” you hear about in the object-oriented world? The stuff that sounds good but never much pans out? In schemas and data definitions, inheritance is simple and … it’s effective! It’s been implemented for a long time in the DBMS concept of domains, but it makes sense to greatly extend it and make it multi-level and multi-parent.

You’ve gone to the trouble of defining the multi-field group of address. There may be variations that have lots in common, like billing and shipping address. Why define each kind of address from scratch? Why not define the common parts once and then say what’s unique about shipping and billing?

Once you’re in the world of inheritance, you start getting some killer quality and productivity. Suppose it’s decades ago and the USPS has decided to add another 4 digits to the zip code. Bummer. If you’re in the enhanced schema world, you just go into the master definition, make the change, and voila! Every use of zip code is now updated.
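
A toy sketch of schema-level inheritance, again with invented structures: the base address is defined once, the variations declare only what is unique to them, and the zip change is made in exactly one place.

```python
# Sketch only: toy single-parent inheritance between schema definitions.

DOMAINS = {
    "address": {
        "fields": {"street": "string(80)", "town": "string(40)",
                   "state": "string(2)", "zip": "string(5)"},
    },
    "shipping_address": {
        "inherits": "address",
        "fields": {"delivery_notes": "string(200)"},   # only what's unique
    },
    "billing_address": {
        "inherits": "address",
        "fields": {"attention_of": "string(80)"},
    },
}

def resolve(name, domains=DOMAINS):
    """Flatten a definition by walking up its inheritance chain."""
    spec = domains[name]
    fields = {}
    if "inherits" in spec:
        fields.update(resolve(spec["inherits"], domains))
    fields.update(spec["fields"])
    return fields

# The USPS adds ZIP+4: one edit in the base definition...
DOMAINS["address"]["fields"]["zip"] = "string(10)"

# ...and every derived address is already updated.
print(resolve("shipping_address")["zip"])   # -> string(10)
print(resolve("billing_address")["zip"])    # -> string(10)
```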

Schema updating with databases

Every step you take down the road of centralized schema takes some work but delivers serious benefits. So let’s turn to database schema updates.

Everyone who works with a database knows that updating the database schema is a process. Generally you try to make updates backwards compatible. It’s nearly always the case that the database schema change has to be applied to the test version of the database first. Then you update the programs that depend on the new or changed schema elements and test with the database. When it’s OK, you do the same to the production system, updating the production database first before releasing the code that uses it.

Having a centralized schema that encompasses all programs and databases doesn’t change this, but makes it easier – fewer steps with fewer mistakes. First you make the change in the centralized schema. Then it’s a process of generating the data definitions first for the test systems (database and programs) and then for the production system. You may have made just a couple of changes to the centralized schema, but because of inheritance and all the data definitions that are generated, you might end up with dozens of changes in your overall system – UI pages, back end services, API calls and definitions and the database schema. Doing those dozens of changes by hand and making an omission or mistake on just one of them means a bug that has to be found and fixed.
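
As a sketch of that generation step (formats invented here, ALTER syntax shown PostgreSQL-style), a small tool can diff the old and new versions of the central schema and emit the database changes, so the very same output is applied first to the test database and then to production.

```python
# Sketch only: diff two versions of a central schema and emit SQL migrations.

OLD = {"customer": {"name": "VARCHAR(80)", "zip": "VARCHAR(5)"}}
NEW = {"customer": {"name": "VARCHAR(80)", "zip": "VARCHAR(10)",
                    "loyalty_tier": "VARCHAR(20)"}}

def migration(old, new):
    """Emit ALTER TABLE statements for added or changed columns."""
    stmts = []
    for table, cols in new.items():
        for col, col_type in cols.items():
            if col not in old.get(table, {}):
                stmts.append(f"ALTER TABLE {table} ADD COLUMN {col} {col_type};")
            elif old[table][col] != col_type:
                stmts.append(f"ALTER TABLE {table} ALTER COLUMN {col} TYPE {col_type};")
    return stmts

# Run against the test database first, then (unchanged) against production.
for stmt in migration(OLD, NEW):
    print(stmt)
# ALTER TABLE customer ALTER COLUMN zip TYPE VARCHAR(10);
# ALTER TABLE customer ADD COLUMN loyalty_tier VARCHAR(20);
```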

Conclusion

I’ve only scratched the surface of a huge subject in this post. But in practice, it’s a hill you can climb. Each step yields benefits, and successive steps deliver increasingly large results in terms of productivity and quality. The overall picture should be clear: you are taking a wide variety of data definitions expressed in code in different languages and parts of a system and step by step, collapsing them into a small number of declarative, meta-data attributes of a centralized schema. A simple generator (compile-time or run-time) can turn the centralized information into what’s needed to make the system work.

In doing this, you have removed a great deal of redundancy from your system. You’ve made it easier to change. While rarely looked on as a key thing to strive for, the fact that the vast majority of what we do to software is change it makes non-redundancy the most important measure of goodness that software can have.

What I've described here are just the first steps up the mountain. Near the mountain's top, most of a program's functionality is defined by metadata!

FWIW, the concept I'm explaining here is an OLD one. It's been around and been implemented to varying extents in many successful production systems. It's the core of climbing the tree of abstraction. When and to the extent it's been implemented, the productivity and quality gains have in fact been achieved. Ever hear of the RAILS framework in Ruby, implementing the DRY (Don't Repeat Yourself) concept? A limited version of the same idea. Apple's credit card runs on a system built on these principles today. This approach is practical and proven. But it's orthogonal to the general thoughts about software that are generally taught in Computer Science and practiced in mainstream organizations.

This means that it's a super-power that software ninjas can use to program circles around the lumbering armies of mainstream software development organizations.

Posted by David B. Black on 05/23/2022 at 03:15 PM | Permalink | Comments (0)

The Goals of Software Architecture

What goals should software architecture strive to meet? You would think that this subject would have been intensely debated in industry and academia and the issue resolved decades ago. Sadly, such is not the case. Not only can't we build good software that works in a timely and cost-effective way, we don't even have agreement or even discussion about the goals for software architecture!

Given the on-going nightmare of software building and the crisis in software that is still going strong after more than 50 years, you would think that solving the issue would be top-of-mind. As far as I can tell, not only is it not top-of-mind, it’s not even bottom-of-mind. Arguably, it’s out-of-mind.

What is Software Architecture?

A software architecture comprises the tools, languages, libraries, frameworks and overall design approach to building a body of software. While the mainstream approach is that the best architecture depends on the functional requirements of the software, wouldn’t it be nice if there were a set of architectural goals that were largely independent of the requirements for the software? Certainly such an independence would be desirable, because it would shorten and de-risk the path to success. Read on and judge for yourself whether there is a set of goals that the vast majority of software efforts could reasonably share.

The Goals

Here’s a crack at common-sense goals that all software architectures should strive to achieve and/or enable. The earlier items on the list should be very familiar. The later items may not be goals of every software effort; the greater the scope of the software effort, the more important they are likely to become.

  • Fast to build
    • This is nearly universal. Given a choice, who wants to spend more time and money getting a software job done?
  • View and test as you build
    • Do you want to be surprised at the end by functionality that isn't right or deep flaws that would have been easy to fix during the process?
  • Easy to change course while building
    • No set of initial requirements is perfect. Things change, and you learn as you see early results. There should be near-zero cost of making changes as you go.
  • Minimal effort for fully automated regression testing
    • What you've built should work. When you add and change, you shouldn't break what you've already built. There should be near-zero cost for comprehensive, on-going regression testing.
  • Seconds to deploy and re-deploy
    • Whether your software is in progress or "done," deploying a new version should be near-immediate.
  • Gradual, controlled roll-out
    • When you "release" your software, who exactly sees the new version? It is usually important to control who sees new versions when.
  • Minimal translation required from requirements to implementation
    • The shortest path, with the least translation from what is wanted to the details of building it, yields speed and accuracy and avoids mis-translations.
  • Likelihood of slowness, crashes or downtime near zero
    • 'Nuff said.
  • Easily deployed to all functions in an organization
    • Everything that is common among functions and departments is shared
    • Only differences between functions and departments need to be built
  • Minimal effort to support varying interfaces and roles
    • Incorporate different languages, interfaces, modes of interaction and user roles into every aspect of the system’s operation in a central way
  • Easily increase sophisticated work handling
    • Seamless incorporation of history, evolving personalization, segmentation and contextualization in all functions and each stage of every workflow
  • Easily incorporate sophisticated analytics
    • Seamless ability to integrate on and off-line Analytics, ML, and AI into workflows
  • Changes the same as building
    • Since software spends most of its life being changed, all of the above for changes

Let’s have a show of hands. Anyone who thinks these are bad or irrelevant goals for software, please raise your hand. Anyone?

I'm well aware that the later goals may not be among the early deliverables of a given project. However, it's important to acknowledge such goals and their rising importance over time so that the methods to achieve earlier goals don't increase the difficulty of meeting the later ones.

Typical Responses to the Goals

I have asked scores of top software people and managers about one or more of these goals. I detail the range of typical responses to a couple of them in my book on Software Quality.

After the blank stare, the response I've most often gotten is a strong statement about the software architecture and/or project management methods they support. These include:

  • We strictly adhere to Object-oriented principles and use language X that minimizes programmer errors
  • We practice TDD (test-driven development)
  • We practice X, Y or Z variant of Agile with squads for speed
  • We have a micro-services architecture with enterprise queuing and strictly enforced contracts between services
  • Our quality team is building a comprehensive set of regression tests and a rich sandbox environment.
  • We practice continuous release and deployment. We practice dev ops.
  • We have a data science team that is testing advanced methods for our application

I never get any discussion of the goals or their inter-relationships. Just a leap to the answer. I also rarely get "this is what I used to think/do, but experience has led me to that." I don't hear concerns or limitations of the strongly asserted approaches. After all, the people I ask are experts!

What's wrong with these responses?

In each case, the expert asserts that his/her selection of architectural elements is the best way to meet the relevant goals. Yet for the typical answers listed above, the results rarely stand out from the crowd.

The key thing that's wrong is the complete lack of principles and demonstration that the approaches actually come closer to meeting the goals than anything else.

The Appropriate Response to the Goals

First and foremost, how about concentrating on the goals themselves! Are they the right goals? Do any of them work against the others?

That's a major first step. No one is likely to get excited, though. Most people think goals like the ones listed above don't merit discussion. They're just common sense, after all.

Things start to get contentious when you ask for ways to measure progress towards each goal. If you're going to the North Pole or climbing Mt. Everest, shouldn't you know where it is, how far away you are, and whether your efforts are bringing you closer?

Are the goals equally important? Is their relative importance constant, or does the importance change?

Wouldn't it be wonderful if someone, somewhere took on the job of evaluating existing practices and ... wait for it ... measured the extent they achieved the goals. Yes, you might not know what "perfect" is, but surely relative achievement can be measured.

For example, people are endlessly inventing new software languages and making strong claims about their virtues. Suppose similar claims were made about new bats in baseball. Do you think it might be possible that the batter's skill makes more of a difference than the bat? Wouldn't it be important to know? Apparently, this is one of the many highly important -- indeed, essential -- questions in software that never gets asked, let alone answered.

Along the same lines, wouldn't it be wonderful if someone took on the job of examining outliers? Projects that didn't just work out in the typical dismal way, but failed spectacularly? On the other end of the spectrum, wouldn't amazingly fast projects be interesting? This should be done for start-from-scratch projects, but it's equally important for changes to existing software.

A whole slew of PhD's should be given out for pioneering work on identifying and refining the exact methods that make progress towards the goals. It's likely that minor changes to the methods used to meet the earlier goals well would make a huge difference in meeting later goals such as seamlessly incorporating the results of analytics.

Strong Candidates for Optimal Architecture

After decades of programming and then more of examining software in the field, I have a list of candidates for optimal architecture. My list isn't secret -- it's in books and all over this blog. Here are a couple of places to start:

Speed-optimized software

Occamality

Champion Challenger QA

Microservices

The Dimensions

Abstraction progression

The Secrets

The books

Conclusion

I've seen software fashions change over the years, with things getting hot, fading away, and sometimes coming back with a new name. The fashions get hot, and all tech leaders who want to be seen as modern embrace them. No real analysis. No examination of the principles involved. Just claims. At the same time, universities hand out degrees in Computer Science, granted by professors who are largely unscientific. In some ways they'd be better off in Art History -- except they rarely have taste and don't like studying history either.

I look forward to the day when someone writes what I hope will be an amusing history of the evolution of Computer Pseudo-Science.

Posted by David B. Black on 05/09/2022 at 05:03 PM | Permalink | Comments (0)

Making Fun of Object-Orientation in Software Languages

When a thing is held in exaltation by much of the world and its major institutions, when that thing is sure that it's the best thing ever, and when people who support the thing are convinced that they're superior to the rest of us, who are nothing but unsophisticated hackers, then you've got something that's fun to make fun of. A target-rich environment.

There are lots of things to make fun of in software. There are project managers who solemnly pronounce that, due to their expertise, the project is on track and will be delivered on time and to spec. There are the software architects who haven't met a real user or written a line of production code in years, who proudly announce their design of a project to migrate to a graph database or micro-services. There are other juicy targets. But none comes close to the exalted ridiculousness of object-oriented languages (the purer the better) and those who shill for them.

Are you a member of the O-O cult, offended by this talk of making fun of supposed imperfections of the one true approach to programming languages? My sincere sympathies to you. Check this out. It's heavy-duty cult de-programming material. It probably won't work for you, but give it a try.

Back to the fun. Here's a start from an excellent essay by someone who tried for years to make OOP work.

[Image: excerpt from the essay on OOP]

Here are some wonderful highlights from a collection made by a person who supports OOP but thinks most programmers don't know how to program it well.

Edsger W. Dijkstra (1989)
“TUG LINES,” Issue 32, August 1989
“Object oriented programs are offered as alternatives to correct ones” and “Object-oriented programming is an exceptionally bad idea which could only have originated in California.”

Paul Graham (2003)
The Hundred-Year Language
“Object-oriented programming offers a sustainable way to write spaghetti code.”

Here are highlights from a wonderfully rich collection.

“object-oriented design is the roman numerals of computing.” – Rob Pike

“The phrase "object-oriented” means a lot of things. Half are obvious, and the other half are mistakes.“ – Paul Graham

“The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.” – Joe Armstrong

“I used to be enamored of object-oriented programming. I’m now finding myself leaning toward believing that it is a plot designed to destroy joy.” – Eric Allman

OO is the “structured programming” snake oil of the 90's. Useful at times, but hardly the “end all” programming paradigm some like to make out of it.

From another section of the same wonderfully rich collection.

Being really good at C++ is like being really good at using rocks to sharpen sticks. – Thant Tessman

Arguing that Java is better than C++ is like arguing that grasshoppers taste better than tree bark. – Thant Tessman

There are only two things wrong with C++: The initial concept and the implementation. – Bertrand Meyer

More from a good extended essay on OOP:

“C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do, it blows away your whole leg.”

It was Bjarne Stroustrup who said that, so that’s ok, I guess.

“Actually I made up the term ‘object-oriented’, and I can tell you I did not have C++ in mind.” — Alan Kay

“Within C++, there is a much smaller and cleaner language struggling to get out.” — Bjarne Stroustrup

“C++ is history repeated as tragedy. Java is history repeated as farce.” — Scott McKay

“Java, the best argument for Smalltalk since C++.” — Frank Winkler

“If Java had true garbage collection, most programs would delete themselves upon execution.” — Robert Sewell

Object-oriented design and programming remains a useful way to think about parts of some software problems, as I've described here. As a universal approach to software, it's beyond bad. Beyond ludicrous. It is such a joke that the only thing to do is visit it briefly, make jokes, and then move on, with a tinge of regret about the science-less-ness of Computer Science.

 

Posted by David B. Black on 05/03/2022 at 09:14 AM | Permalink | Comments (0)

The forbidden question: What caused the obesity epidemic?

There is an obesity problem. Everyone knows it. Public health authorities proclaim it. Over half the population in the US is now obese. The consequences of being obese in terms of health are serious. Solutions are proposed, but they don’t seem to work. The question that’s almost never asked, the answer to which would help us understand AND FIX the problem, is pure common sense: what started the epidemic? What changed to cause the steady rise of overweight and obese people?

The reason no one wants to ask the question is that the most probable answer is something our major institutions, Experts and Authorities don’t want us to know: their nutrition recommendations, widely promoted and visible on the label of most foods you buy, are based on bad, corrupted science.

We now know how and why the science was wrong. After much study and careful trials, we know what's right. But because of the refusal of the authorities to admit and correct their error, millions of people continue to suffer and die of diseases they would not have if our Medical Health Elites would suck it up, admit error, and fix it.

There is an epidemic of obesity

The epidemic. Well known, accepted. Here's the FDA:

[Image: FDA statement on the obesity epidemic]

Here's the CDC:

Obesity is a serious chronic disease, and the prevalence of obesity continues to increase in the United States. Obesity is common, serious, and costly. This epidemic is putting a strain on American families, affecting overall health, health care costs, productivity, and military readiness.

Is it under control? The CDC again:

From 1999–2000 through 2017–2018, US obesity prevalence increased from 30.5% to 42.4%. During the same time, the prevalence of severe obesity increased from 4.7% to 9.2%.

What's bad about being obese?

According to the CDC, here are some of the consequences of being obese:

[Image: CDC list of health consequences of obesity]

What are the medical costs resulting from obesity?

A highly detailed study was published in 2021 going into depth to determine the direct medical costs of obesity.

RESULTS: Adults with obesity in the United States compared with those with normal weight experienced higher annual medical care costs by $2,505 or 100%, with costs increasing significantly with class of obesity, from 68.4% for class 1 to 233.6% for class 3. The effects of obesity raised costs in every category of care: inpatient, outpatient, and prescription drugs. ...  In 2016, the aggregate medical cost due to obesity among adults in the United States was $260.6 billion.

In other words, obese people have more than double the medical care costs compared to those who are not obese. More important, the obese people themselves suffer the poor health resulting from their condition!

How are we told to prevent and/or fix it?

From the CDC:

Obesity is a complex health issue resulting from a combination of causes and individual factors such as behavior and genetics.

Healthy behaviors include regular physical activity and healthy eating. Balancing the number of calories consumed from foods and beverages with the number of calories the body uses for activity plays a role in preventing excess weight gain.

A healthy diet pattern follows the Dietary Guidelines for Americans which emphasizes eating whole grains, fruits, vegetables, lean protein, low-fat and fat-free dairy products, and drinking water.

In other words, exercise more, eat less, and follow the official diet guidelines which emphasize avoiding fat in meat and dairy.

The origins of the obesity epidemic

Did the obesity epidemic appear out of nowhere, for no reason? Nope. The key to understanding and responding to any epidemic is to trace its origins to the time and place of its start. Only then can you understand the problem and often get good ideas about how to mitigate the epidemic and prevent similar ones from getting started.

Look at this chart from the CDC:

[Chart: CDC trends in adult obesity and severe obesity prevalence]

A sharp upwards turn in Obesity and Severe Obesity took place in 1976-1980 and has continued rising. From the tables in the document from which this chart was taken, Obesity was about 14% and has risen to 43%, while Severe Obesity was about 1% and has risen to over 9%. That's about 3X and 9X increases. Total Obesity is now over 50% of the population! During this period the Overweight share has remained about the same (about 32%), which means that a large number of people "graduated" to higher levels of weight, probably many normals becoming Overweight while as many Overweights became Obese.

What happened when the "hockey stick" upwards trend in obesity started? It turns out something big happened, with lots of public attention. According to the government website on the history of nutritional guidelines:

A turning point for nutrition guidance in the U.S. began in the 1970s with the Senate Select Committee on Nutrition and Human Needs....

In 1977, after years of discussion, scientific review, and debate, the U.S. Senate Select Committee on Nutrition and Human Needs, led by Senator George McGovern, released Dietary Goals for the United States. ...

The recommendations included:

Increase the consumption of complex carbohydrates and “naturally occurring” sugars...

Reduce overall fat consumption...

Reduce saturated fat consumption to account for about 10 percent of total energy intake...

The widely publicized recommendations were followed up by the first in the series of official expert-approved documents:

In February 1980, USDA and HHS collaboratively issued Nutrition and Your Health: Dietary Guidelines for Americans,

It's important to note that the focus was NOT on obesity. It was on diet-related health, with a particular focus on heart disease. The consensus of Expert opinion at the time was that eating saturated fat causes heart disease. In an effort to reduce heart disease, the authorities started the drum-beat of "Stay healthy! Eat less saturated Fat!"

Obesity took off when we obeyed the Experts

The new dietary advice was shouted from the hill tops. It was pushed by government agencies. It was endorsed by every major health institution, and pushed by nutritionists and doctors everywhere. It was emblazoned on food packaging by law, each package stating how much of the evil, heart-killing saturated fat was in each serving, and how much of your "daily allowance" it used up.

The food that was offered in grocery stores and restaurants changed to reflect the "scientific" consensus. Bacon was bad. If you had to eat meat (even though you shouldn't), you should eat lean (no fat) meat. All these things are still what we see!

Here is a study based on the US National Health and Nutrition Examination Survey (NHANES) that demonstrates the strong linkage between the diet recommendations and the growth of obesity.

From a valuable study on obesity (behind a paywall):

When we put together the following…

1) Obesity is not a simplistic imbalance of energy in and energy out, but a far more complex matter of how, biochemically, the body can store or utilize fat. Carbohydrate is the best macronutrient to facilitate fat storage and prevent fat utilization.

2) Fat/protein calories have jobs to do within the body – they can be used for basal metabolic repair and maintenance. Carbohydrate is for energy alone; it needs to be burned as fuel or it will be stored as fat.

… carbohydrates can be seen as uniquely suited to weight gain and uniquely unsuited to weight loss. The macronutrient that we have been advising people to eat more of is the very macronutrient that enables fat to be stored and disables fat from being utilized.

Increasingly people ate what they were told to eat. Young people grew up eating in the new style, with vastly more packaged foods, sugar and carbohydrates than earlier generations. No surprise, they got fat early in life, and stayed fat.

Marty Makary MD, surgeon and Professor at Johns Hopkins, treats this from a different angle in his recent book.

Dr. Dariush Mozaffarian, dean of Tufts University’s Friedman School of Nutrition—the nation’s leading nutrition school ... recently wrote in the Journal of the American Medical Association, “We really need to sing it from the rooftops that the low-fat diet concept is dead, there are no health benefits to it.” As a gastrointestinal surgeon and advocate for healthful foods, I’m well aware how this low-fat teaching is based on the medical establishment’s embarrassing, outdated theory that saturated fat causes heart disease. A landmark 2016 article in the Journal of the American Medical Association found that the true science was actually being suppressed by the food industry. Highly respected medical experts like my former Johns Hopkins colleague Dr. Peter Attia are now correcting the medical establishment’s sloppy teachings. He and many other lipidologists know that the low-fat bandwagon has damaged public health. It was driven by an unscientific agenda advanced by the American Heart Association and the food industry, which sponsored the misleading food pyramid. These establishment forces spent decades promoting addictive, high-carbohydrate processed foods because the low-fat foods they endorsed require more carbohydrates to retain flavor. That 40-year trend perfectly parallels our obesity epidemic. Medical leaders like Dr. Attia have been trying to turn this aircraft carrier around, but it’s been a challenge. Despite the science, the dogma remains pervasive. In hospitals today, the first thing we do to patients when they come out of surgery, exhausted and bleary-eyed, is to hand them a can of high-sugar soda. Menus given to hospitalized patients promote low-fat options with a heart next to those menu items. And when physicians order food for patients in electronic health records, there’s a checkbox for us to order the “cardiac diet,” which hospitals define as a low-fat diet. Despite science showing that natural fats pose no increased risk of heart disease and that excess sugar is the real dietary threat to health, my hospital still hands every patient a pamphlet recommending the “low-fat diet” when they’re discharged from the cardiac surgery unit, just as we have been doing for nearly a half century. But nowhere is that now debunked low-fat recommendation propagated as much as in wellness programs.

For more study

The experts are clear on this subject. You already know this, but here are highlights of their views on fat and on cholesterol. Here is background on how saturated fat and cholesterol became menaces. Here is why you should eat lots of saturated fat and why you should not take drugs to lower your cholesterol.

With the billion-dollar-revenue American Heart Association continuing to villainize saturated fat, this insanity is unlikely to stop soon.

Conclusion

The cause of the obesity epidemic is clear. No one talks about it because the people in charge refuse to admit their role in causing it. As the evidence from RCT's continues to pile up, careful reading shows that the emphatic language about saturated fat has lightened up a bit, but we're not even close to the equivalent of acknowledging, for example, that smoking cigarettes is bad for you. We should be shouting "eat lots of natural saturated fat, the kind in meat, milk, cheese and eggs." We're not there yet. Educated people can nonetheless make their own decisions and do just that -- and improve their health as a result.

 

Posted by David B. Black on 04/25/2022 at 05:12 PM | Permalink | Comments (0)

How to Fix Software Development and Security: A Brief History

As the use of computers grew rapidly in the 1960’s, the difficulty of creating quality software that met customer needs became increasingly evident. Wave after wave of methods were created to solve the problem of software development and security – without solving it! This is a high-level survey of those failed attempts, nearly all of which are now standard practice in spite of failing to fix the problem. There are now software auditing firms that will carefully examine a software organization to see in which ways it deviates from ineffective standard practice – so that it can be “corrected!”

Houston, We've Had a Problem

The astronauts of the Apollo 13 mission to the Moon radioed base control about the problem that threatened the mission and their lives.

[Photo: Apollo 13 mission control, April 14, 1970]

The astronauts were saved after much effort and nail-biting time.

Should the people in charge of software make a similar call to mission control? Yes! They have made those calls, and continue to make them, thousands of times per day!!

Getting computers to do what you want by creating the kind of data we call software was a huge advance over the original physical method of plugs and switches. But it was still wildly difficult. Giant advances were made in the 1950’s that largely eliminated the original difficulty.

As the years went on into the 1960’s, the time, cost and trouble of creating and modifying software became hard to ignore. The problems got worse, and bad quality surfaced as a persistent issue. There were conferences of experts and widely read books, including one from a leader of the IBM 360 Operating System project, one of the largest efforts of its kind.

[Image: cover of The Mythical Man-Month]

The Apollo 13 astronauts were saved, but the disaster in software development has resisted all the treatments that have been devised for it. With the invention and spread of the internet, we are now plagued with cybercrime, to the extent that ransomware attacks now take place (as of June 2021) ... get ready for it ... 149,000 times per week!! I think a few more exclamation points would have been appropriate for that astounding statistic, but I'm a laid-back kind of guy, so I figured I'd stick with a mild-mannered two. Here's my description from more than four years ago on the state of ransomware and the response of the "experts."

Following are highlights of the methods that have been devised to solve the problem of software development and security. None of them has worked, but they have nonetheless become standard practice.

Programming Languages

The assembled experts in 1968 decided that "structured programming" would solve the problem of writing software that worked. Here's how that turned out. A group of experts recently convened to survey 50 years of progress in programming languages. They were proud of themselves, but had actually made things worse. Highlights here. One of those early efforts led to object-oriented languages, which are now faith-based dogma in the evidence-free world of Computer Science. New languages continue to be invented that claim to reduce programmer errors and increase productivity. What always happens is that the "advances" are widely publicized while the large-scale failures are concealed; here are a couple of juicy examples. Above all, the flow of new languages provides clear demonstration that programmers don't have enough useful work to do to keep them busy.

Outsourcing

While programmers babbled among themselves about this language or that method, company managers couldn't help noticing that IT budgets continued to explode while the results were an avalanche of failures. Service companies emerged that promised better results than company management could achieve, because they claimed to be experts in software and its management.

One of the pioneers in computer outsourcing was Ross Perot's Electronic Data Systems (EDS), which had $100 million in revenue by 1975. By the mid-1980's it had over 40,000 employees and over $4 billion in revenue. It continued to grow rapidly, along with a growing number of competitors, including the services branches of the major computer vendors. In a typical deal, a company would turn over some or all of its hardware and software operations to the outsourcer, who would add layers of management and process. They succeeded by lowering expectations and hiding the failures.

Off-shoring

As communications and travel grew more cost effective, basing outsourced operations in another country became practical. Off-shoring had a huge advantage: while the results were no better than regular outsourcing, paying employees much less than US wages enabled marginally lower prices and funded a huge infrastructure of management and reporting, which further disguised the as-usual results.

US-based outsourcers started doing this fairly early, while non-US vendors providing software services grew rapidly. Tata Consultancy Services first established a center in the 1980's and now has revenue over $20 billion; at times it has been the world's most valuable IT services company, employing over 500,000 people. Infosys is another India-based giant, along with hundreds of others.

Project Management

Deeper investment in the growing and evolving collection of project management methods has been a key part of software organizations. The larger the organization, the greater the commitment to widely accepted, mainstream project management methods. As new methods enter the mainstream, Agile for example, managers and outsourcing organizations sell their embrace of those methods as a reason for hiring and contracting managers to feel confident in their decisions. Project management has definitely succeeded in forming a thick layer of obfuscation over the reality of software development, quality and security. The only method of project management that has ever delivered "good" results for software is the classic method of wild over-estimation. See this for one of the conceptual flaws at the heart of software project management, this for an overview and this for a comprehensive look.

Education and Certification

Nearly all professions have education and certification requirements, from plumbers and electricians to doctors. Software organizations grew to embrace imposing education and certification requirements on their people as a sure method to achieve better outcomes. This came to apply not just to the developers themselves but also to the QA people, security people and project managers. Unlike those other professions, computer education and certification has been distinctly divorced from results.

Standards, Compliance and Auditing

Standards were developed early on to assure that all implementations of a given software language were the same. As software problems grew, standards were increasingly created to solve the problem. The standards grew to include fine-grained detail of the development process itself and even standards that specified the "maturity level" of the organization.

Just as a new house is inspected to assure that it complies with the relevant building code, including specialists to inspect the plumbing and electricity, the software industry embraced a discipline of auditing and compliance certification to assure that the organization was meeting the relevant standards. A specialized set of standards grew for various aspects of computer security, with different measures applied to financial transactions and healthcare records for example. The standards and compliance audits have succeeded in consuming huge amounts of time and money while having no positive impact on software quality and security.

Suppress, Deflect and Ignore

Everything I've described above continues apace, intent on its mission of making software development successful and computer security effective. People who write code spend much of their time engaged in activities that are supposed to assure that the code they write meets the needs in a timely manner and has no errors, using languages that are supposed to help them avoid error, surrounded by tests created by certified people using certified methods to assure that it is correct. Meanwhile highly educated certified computer security specialists assure that security is designed into the code and that it will pass all required audits when released.

How is this working out? In spite of widespread information blackouts imposed on failures, enough of them are so blatant that they can't be suppressed or ignored, and they make it clear that the problems aren't getting better.

Conclusion

We're over 50 years into this farce, which only continues because the clown show that sucks in all the money is invisible to nearly everyone. The on-the-scene reporters who tell all the spectators what's happening on the field can't see it either. What people end up hearing are a series of fantasies intended to deliver no surprises, except in those cases where the reality can't be hidden. Mostly what people do is things that make themselves feel better while not making things better. The time for a major paradigm shift to address these problems has long since passed.

Posted by David B. Black on 04/19/2022 at 11:21 AM | Permalink | Comments (0)

The Facts are Clear: Don't Take Cholesterol-lowering Drugs

I have described the background and evidence of the diet-heart fiasco -- the hypothesis-turned-fake-fact that you shouldn't eat saturated fat because it raises your "bad" LDL cholesterol, which causes heart disease. Not only is it wrong -- eating saturated fat is positively good for you!

This deadly farce has generated a medical effort to lower the cholesterol of patients in order to keep them healthy. There have been over a trillion dollars in sales for cholesterol-lowering statin drugs so far. The entire medical establishment has supported this as a way to prevent heart disease. There's just one little problem, now proved by extensive, objective real-world evidence and biochemical understanding: Cholesterol, including the "bad" LDL, is NOT a cause of heart disease. Even indirectly. Lowering LDL via diet change or statins does NOT prevent heart disease. So don't avoid saturated fats or take statins!

Here's the kicker: higher cholesterol is associated pretty strongly with living longer, particularly in women! And the side effects of the drugs are widespread and serious!

Basic facts

Let's start with a few facts:

  • Eating fat will NOT make you fat. Eating sugar will make you fat.
  • The human brain is 70% fat.
  • 25% of all cholesterol in the body is found in the brain.
  • All cells in your body are made of fat and cholesterol.
  • LDL is not cholesterol! HDL isn't either! They are lipoproteins, particles that carry cholesterol and fat-soluble vitamins. Lowering them lowers your vitamins.

To get the big picture about the diet-heart hypothesis (the reason why you're supposed to take statins in order to lower your cholesterol in order to prevent heart disease), see this post on the Whole Milk Disaster. For more detail, see the post on why you should eat lots of saturated fat.

To get lots of detail, read this extensive review of Cholesterol Con and this extensive review of The Clot Thickens -- and by all means dive into the books. Here is an excellent summary written by an MD explaining the situation and the alternative thrombogenic hypothesis.

The Bogus Hypothesis

How did this get started? Stupidity mixed with remarkably bad science. Here is a brief summary of a PhD thesis examination of the build-up to the Cholesterol-is-bad theory:

The cholesterol hypothesis originated in the early years of the twentieth century. While performing autopsies, Russian pathologists noticed build-up in the arteries of deceased people. The build-up contained cholesterol. They hypothesised that the cholesterol had caused the build-up and blocked the artery leading to a sudden death (the term “heart attacks” was not much used before the end of World War II).

An alternative hypothesis would be that cholesterol is a substance made by the body for the repair and health of every cell and thus something else had damaged the artery wall and cholesterol had gone to repair that damage. This is the hypothesis that has the memorable analogy – fire fighters are always found at the scene of a fire. They didn’t cause the fire – they went there to fix it. Ditto with cholesterol. The alternative hypothesis did not occur to the pathologists by all accounts.

The pathologists undertook experiments in rabbits to feed them cholesterol to see if they ‘clogged up’ and sure enough they did. However, rabbits are herbivores and cholesterol is only found in animal foods and thus it’s not surprising that feeding animal foods to natural vegetarians clogged them up. When rabbits were fed purified cholesterol in their normal (plant-based) food, they didn’t clog up. That should have been a red flag to the hypothesis, but it wasn’t.

Then Ancel Keys got involved, and the bad idea became gospel.

Population studies

Before taking drugs like statins to reduce cholesterol, doesn't it make sense to see if people with lower cholesterol lead longer lives? The question has been examined. Short answer: people with higher cholesterol live longer. 

Here is data from a giant WHO database of cholesterol from over 190 countries:

[Chart: cholesterol vs. longevity for men, WHO data]

More cholesterol = longer life for men, a strong correlation. Even more so for women, who on average have HIGHER cholesterol than men:

[Chart: cholesterol vs. longevity for women, WHO data]

When you dive into specific countries and history, the effect is even more striking. Check out the Japanese paradox:

To illustrate the Japanese paradox, he reported that, over the past 50 years, the average cholesterol level has risen in Japan from 3.9 mmol/l to 5.2 mmol/l. Deaths from heart disease have fallen by 60% and rates of stroke have fallen seven-fold in parallel. A 25% rise in cholesterol levels has thus accompanied a six-fold drop in death from CVD (Ref 6).

And the strange things going on in Europe led by those cheese-loving French:

The French paradox is well known – the French have the lowest cardiovascular Disease (CVD) rate in Europe and higher than average cholesterol levels (and the highest saturated fat consumption in Europe, by the way). Russia has over 10 times the French death rate from heart disease, despite having substantially lower cholesterol levels than France. Switzerland has one of the lowest death rates from heart disease in Europe with one of the highest cholesterol levels.

Hard-core RCT's (Randomized Controlled Trials)

RCT's are the gold standard of medical science and much else. You divide a population into a control group for which nothing changes and a test group, which is subjected to the treatment you want to test. It's hard to do this with anything like diet! But it has been done in controlled settings a few times at good scale. The results of the RCT's that have been done did NOT support the fat-cholesterol-heart-disease theory and so were kept hidden. But in a couple cases they've been recovered, studied and published.

A group of highly qualified investigators has uncovered two such studies and published the results in the British Medical Journal in 2016: "Re-evaluation of the traditional diet-heart hypothesis: analysis of recovered data from Minnesota Coronary Experiment (1968-73)." They summarize the results of their earlier study:

Our recovery and 2013 publication of previously unpublished data from the Sydney Diet Heart Study (SDHS, 1966-73) belatedly showed that replacement of saturated fat with vegetable oil rich in linoleic acid significantly increased the risks of death from coronary heart disease and all causes, despite lowering serum cholesterol.14

Lower cholesterol meant greater risk of death. Clear.

The Minnesota study was pretty unique:

The Minnesota Coronary Experiment (MCE), a randomized controlled trial conducted in 1968-73, was the largest (n=9570) and perhaps the most rigorously executed dietary trial of cholesterol lowering by replacement of saturated fat with vegetable oil rich in linoleic acid. The MCE is the only such randomized controlled trial to complete postmortem assessment of coronary, aortic, and cerebrovascular atherosclerosis grade and infarct status and the only one to test the clinical effects of increasing linoleic acid in large prespecified subgroups of women and older adults.

Moreover, it was sponsored by the most famous proponent of the diet-heart hypothesis: Ancel Keys. So what happened? Here's a brief summary from an article in the Chicago Tribune after the 2016 BMJ study was published:

Second, and perhaps more important, these iconoclastic findings went unpublished until 1989 and then saw the light of day only in an obscure medical journal with few readers. One of the principal investigators told a science journalist that he sat on the results for 16 years and didn't publish because "we were just so disappointed in the way they turned out."

From the BMJ 2016 paper:

The traditional diet heart hypothesis predicts that participants with greater reduction in serum cholesterol would have a lower risk of death (fig 1, line B). MCE participants with greater reduction in serum cholesterol, however, had a higher rather than a lower risk of death.

...

The number, proportion, and probability of death increased as serum cholesterol decreased

Wowza. The "better" (lower) your blood cholesterol levels, the more likely you were to die. In fact, "For each 1% fall in cholesterol there was a 1% increase in the risk of death."

Problems with Statins

Not only do statins not work to lengthen lives, taking them is a bad idea because of their side effects. This is a starting place. For example, check the side effects of a leading statin:

[Image: side-effects list for a leading statin]

Good effects vs. side effects

We know for a fact that lowering your blood cholesterol is a bad idea. We know the drugs that do it have side effects. It's natural to think that the drugs normally do their thing and in rare cases there are side effects. Often, this is far from the truth. Here are excerpts from an article that explains the basic medical math concept of NNT:

Most people have never heard the term NNT, which stands for Number Needed to Treat, or to put it another way, the number of people who need to take a drug for one person to see a noticeable benefit. It's a bit of a counterintuitive concept for people outside medicine, since most people probably assume the NNT for all drugs is 1, right? If I'm getting this drug, it must be because it is going to help me. Well, wrong.

What about the side effects of statins?

Many people who take the drug develop chronic aches and pains. The drug also causes noticeable cognitive impairment in a proportion of those taking it, and some even end up being diagnosed with dementia - how big the risk is unfortunately isn't known, because proper studies haven't been carried out that could answer that question. Additionally, the drug causes blood sugar levels to rise, resulting in type 2 diabetes in around 2% of those taking the drug - it is in fact one of the most common causes of type 2 diabetes.

NNT applied to statins:

Well, if you've already had a heart attack, i.e. you've already been established to be at high risk for heart attacks, then the NNT over five years of treatment is 40. In other words, 39 of 40 people taking a high dose statin for five years after a heart attack won't experience any noticeable benefit. But even if they're not the lucky one in 40 who gets to avoid a heart attack, they'll still have to contend with the side effects.

How many patients are told about NNT? If you haven't had a heart attack, the NNT is vastly greater than 40, and yet statins are prescribed when cholesterol is "too high" no matter what. Many of the side effects happen in 10% of the cases, which is four times greater than the number of people who are "helped." Doctors who do this are indeed members of the "helping profession;" the question is, who exactly are they helping?
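
To make the arithmetic explicit, here's a tiny sketch using the numbers quoted above (the function names are mine, for illustration only):

    # NNT = 40 means 1 in 40 patients benefits; a side effect hitting 10% of
    # patients is therefore several times more common than the benefit.
    def fraction_helped(nnt):
        return 1.0 / nnt

    def harm_to_benefit_ratio(nnt, side_effect_rate):
        return side_effect_rate / fraction_helped(nnt)

    print(fraction_helped(40))              # 0.025 -> 2.5% avoid a heart attack
    print(harm_to_benefit_ratio(40, 0.10))  # 4.0  -> a 10% side effect is 4x more common than the benefit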

Conclusion

If you value science, you should not worry about lowering your cholesterol. If you value your life and health, you should be happy to have high cholesterol. Likewise, you should avoid taking cholesterol-lowering drugs because in the end they hurt you more than they help you. If you're worried about pharma companies losing profits, it's a much better idea to just send them a monthly check -- forget about their drugs!

 

Posted by David B. Black on 04/08/2022 at 06:17 PM | Permalink | Comments (0)

Software Programming Language Cancer Must be Stopped!

Human bodies can get the horrible disease of cancer. Software programming languages are frequently impacted by software cancer, which also has horrible results.

There are many kinds of cancer, impacting different parts of the body and acting in different ways. They all grow without limit and eventually kill the host. Worse, most cancers can metastasize, i.e., navigate to a different part of the body and start growing there, spreading the destruction and speeding the drive towards death.

Software cancer impacts software languages in similar ways. Once a software programming language has been created and used, enthusiasts decide that the language should have additional features, causing the language to grow and increase in complexity. The language grows and grows, like a cancer. Then some fan of the language, inspired by it in some strange way, thinks a brand-new language must be created, derived from the original but different. Thus the original language evolves into a new language, which then itself tends to have cancerous growth.

Like cancer in humans, programming language cancer leaves a trail of death and destruction in the software landscape. We must find a way to stop this cancer and deny its self-promoting lies that it’s “improving” the language it is destroying.

Programming language origins and growth

All computers have a native machine language that controls how they work. The language is in all cases extremely tedious for humans to use. Solutions for the tedium were invented in the early days of computing, which enabled programmers using the new languages to think more rapidly and naturally about the data they read, manipulated and put somewhere.

Each of the new languages was small and primitive when it was “born.” As the youthful language tried getting somewhere, it struggled to first crawl, then stand with help and finally to walk and run. Growth in the early years was natural and led to good results. Once each new language reached maturity, however, cancer in its various forms began to set in, causing the language to grow in weight and correspondingly to lose strength, agility and overall health.

I have described the giant early advances in language and reaching maturity with the invention of high level languages. After early maturity, a couple small but valuable additions to languages were made to enhance clarity of intention.

The ability to create what amounts to “habits” (frequently used routines) was an important part of the language maturation process. The more valuable of these routines were added to libraries so that any new program that needed them could use them with very little effort. There were a couple of valuable languages created that went beyond 3-GL’s, languages that were both popular and highly productive. It’s a peculiarity of programming language evolution that these languages didn’t become the next-generation mainstream.

That should have been pretty much it! You don’t need a new language to solve a new problem! Or an old problem.

Languages exhibit cancerous growth

In the early days of languages, it made sense that they didn’t emerge as full-grown, fully-capable “adults.” But after a few growth spurts, languages reached maturity and were fully capable of taking on any task – as shown, for example, by the huge amounts of COBOL performing mission-critical jobs in finance, government and elsewhere, and by the fact that the vast, vast majority of web servers run on Linux, written in plain old C. The official language definitions in each case have undergone cancerous growth, ignored by nearly everyone sensible. For example, newer versions of COBOL incorporate destructive object-oriented features. Of course it’s the fanatics that get themselves onto language standardization committees and collaborate with each other to get useless but distracting jots and tittles added that endlessly complicate the language, making it harder to read, write and maintain.

Languages metastasize

There is plain old ordinary cancer, in which language cultists get passionate about important “improvements” that need to be made to a language. Then there are the megalomaniac language would-be-gurus who decide that some existing language is too flawed to improve and needs full-scale re-creation. Those are the august new-language creators, who make up some excuse to create a “new” language, which invariably takes off from some existing language. This has led to hundreds of “major” languages and literally thousands of others that have been invented and shepherded into existence by their ever-so-proud creators. Most such language "inventors" like to ignore the origins of their language, emphasizing its creativity and newness.

Someone might say they’ve “invented” a language, but the reality is that the invention is always some variation on something that exists. In some cases the variation is made explicit, as it was with the verbose and stultifying variation of C called C++, which hog-tied the clean C language with a variety of productivity-killing object-oriented features. And then went on to grow obese with endless additions.

Purpose-driven programming language cancers

There is no unifying theme among the cancers. But high on the list is to somehow improve programmer productivity and reduce error by inventing a language with features that will supposedly accomplish that and similar goals. Chief among these purposes is the object-oriented languages, which have themselves metastasized into endless competing forms. Did you know that using a good OO language like Java results in fewer bugs? Hey, I've got this bridge to sell, real cheap! Functional languages keep striving to keep up with the OO crew for creating the most confining, crippling languages possible. It's a close race!

The genealogy of programming languages

Everyone who studies programming languages sees that there are relationships between any new language and its predecessors. When you look at the tree of language evolution, it’s tempting to compare it to the tree of biological evolution, with more advanced species evolving from earlier, less advanced ones. Humanoids can indeed do much more than their biological ancestors.

That’s what the “parents” of the new languages would have you believe. Pfahh!

I have described the explosive growth of programming languages and some of the pointless variations. But somehow programmers felt motivated to invent language after language, to no good end. Just as bad, programmers decided that existing languages needed endless new things added to them, often copying things from other languages in a crazed effort to “keep up,” I guess.

Various well-intentioned efforts were made to prove the wonderfulness of the newly invented languages by using them to re-write existing systems. These efforts have largely failed, demonstrating the pointlessness of the new languages. There was a notable success: a major effort to re-write a production credit card system from assembler language into supposedly bad, old COBOL!

How to stop language cancer

Unless we want to continue the on-going cancerous growth and metastasizing of software languages, we need to ... cure the cancer! Just STOP! Easy to say, when a tiny minority of crazed programmers around the globe, without enough useful work to keep them from causing trouble, keeps driving the cancer. There is a solution, though.

The first and most important part of the solution is Science. You know, that thing whose many results, along with effective engineering, created the devices on which we use software languages. Software is very much a pre-scientific discipline. There isn't even a way to examine evidence to decide whether one language is better than another. What is called "Computer Science" isn't scientific or even practical, as a comparison to medical science makes clear.

The second path to a solution is to focus on status in software. Today, software people gain status in peculiar ways; usually the person with the greatest distance between their work and the real people who use software has the highest status. A language "inventor" is about as far as you can get from real people using the results of software efforts. The sooner people contributing to software cancer are seen as frivolous time-wasters, the better off everyone will be.

What's the alternative to language cancer?

The most important alternative is to cure it, as expressed above. The most productivity-enhancing effort is to focus instead on libraries and frameworks, which are the proven-in-practice way to huge programmer productivity gains. The "hard" stuff you would otherwise have to program is often available, ready to go, in libraries and frameworks. They are amazing.
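
As a small, hedged illustration (plain Python standard library; the URL is made up): fetching a document over HTTPS and pulling a value out of its JSON is a handful of lines when libraries do the hard parts, versus the sockets, TLS, HTTP parsing and JSON parsing you'd otherwise write yourself.

    # The "hard" stuff -- TLS, HTTP, JSON parsing -- lives in libraries.
    # Only the part specific to your problem is left to write.
    import json
    from urllib.request import urlopen

    with urlopen("https://example.com/api/status.json") as response:  # hypothetical URL
        status = json.load(response)

    print(status.get("state", "unknown"))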

Finally, focusing on the details of language is staying fixed at the lowest level of program abstraction, like continuing to try to make arithmetic better when you would be worlds better off moving up to algebra.

Conclusion

Software language cancer is real. It's ongoing. The drivers of software language cancer continue to fuel more of it by honoring those who contribute to it instead of giving them the scorn they so richly deserve. Software would be vastly better off without this horrid disease.

Posted by David B. Black on 04/05/2022 at 10:02 AM | Permalink | Comments (0)

My Health Insurance Company Tries to Keep me Healthy

I am grateful to have the health insurance I have, and grateful for the payments they've made to resolve problems I've had. Nonetheless, I can't help but be astounded at the never-ending flow of expensive, incompetent, annoying and utterly useless interaction I have had with the company's computer systems. It's small potatoes in the overall scheme of things. It's also simple stuff. Why can't they (and others like them) get it right?

The answer is simple: the company's leaders, like most enterprise companies, want to be leaders in technology. Today, that means funding big, publicized  initiatives in AI and ML. Initiatives that will, of course, transform healthcare. Soon. Getting email right? Getting paper mail right? Trivial stuff. Wins no awards, gets no attention. It's unworthy of attention, like the way the rich owners of a grand house with a dozen servants wouldn't stoop to paying attention to the brand of cleaning products they used.

The email

An email from Anthem showed up in my inbox with the subject "Schedule your checkup now -- at no extra cost." Naturally I open it. Right away there's a graphic, demonstrating that it wasn't just the software team on this job:

[Image: graphic from the Anthem email]

The message with the graphic repeats the message in the subject line, strengthening it -- don't just schedule a checkup, schedule it early. Why should I do this? "It's a good way to stay on top of current health issues and take care of any new ones early, before they become more serious."

Sounds good! Except that the very next thing in this email urging me to "schedule [my] checkup early this year. There's no extra cost." is this:

[Image: coverage fine print from the Anthem email]

My plan "usually" covers it?? WTF?? Right after telling me "There's no extra cost," as in There IS no extra cost??

Then comes "You may pay a copay, percentage of the cost, or deductible if you've already had your physical for the year or if the visit is to diagnose an issue and set a plan for treatment or more tests."

I'm supposed to schedule it "early." I last had an annual physical six months ago.  Is a physical I schedule now, in March, free or not? At the bottom of the email there is a nice big box that says in big type "Schedule your checkup today." It then says "To find a doctor or check what your plan covers, please use the Sydney Health mobile app or visit Anthem.com."

I've already done the Sydney trip, describing it here. Not going there again. I'll go to the main site. I'll spare you the details. They don't know who my primary care doctor is and don't let me tell them. They give me a big list of doctors I could visit, most of whom are pediatric -- oh, yeah, good suggestions, Anthem! They must think I'm young for my age ... or something.

Then I try to find out what my plan covers, as they suggest. Nothing about annual checkups being free of charge; it's all about co-pays. Maybe it's there somewhere, but I can't find it. As usual, the link Anthem provides is to the front door of the site, rather than using a cool new twenty-year-old technology called "deep linking," which would bring me right to the relevant place. Maybe next year. Or decade. Or century.
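For the curious, here's a minimal sketch of what the difference amounts to. The URLs and parameters are made up for illustration; this is not Anthem's actual site structure:

```python
# Hypothetical URLs and parameters, purely for illustration -- not Anthem's real site.
FRONT_DOOR = "https://www.anthem.com/"   # where their links actually send you

def coverage_deep_link(member_id: str, plan_id: str) -> str:
    """Build a link that lands the member directly on the page describing
    what their specific plan covers, instead of the site's front door."""
    return (
        "https://www.anthem.com/member/benefits/preventive-care"
        f"?memberId={member_id}&planId={plan_id}"
    )

print(coverage_deep_link("ABC123", "PPO-2022"))
```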

What could have happened

There's a concept that's been around in the industry for a couple decades called "personalization." It includes things like

  • when you send an email, address it to the person, instead of making the email read like a brochure.
  • reflect basic knowledge of the person, like whether they had an annual checkup last year -- if they did, maybe they already think it's a good thing, and the message should be to be sure to do it again
    • They've got my history -- they could praise me for getting checkups for the last X years, and remind me to keep up the good work.
  • Is the checkup "no cost" or not? Anthem has my account information, name, address and the rest. They have my plan. They know whether it's free or not. They just don't bother to check.
    • Taking my history into account, they could say that, just as last year's checkup was 100% free, this one will be too.
  • As it happens, a week before getting the email I saw my primary care physician and then a specialist who submitted pre-auths for tests. Anthem has the visit claims and pre-auths. I'm doing exactly what they want me to do, as they said in the email, "take care of any new ones early, before they become more serious." Instead, what I hear from Anthem is 100% clueless -- exhorting me to do something that the slightest bit of effort on their part would tell them I'm already doing! Blanketty-blank it!

This is customer interaction 101. It's also common sense. It's standard practice for companies whose tech and marketing teams have progressed past the year 2000 into the current century.
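To show how little is involved, here is a minimal sketch of that kind of personalization logic. The field names and the member record are hypothetical placeholders, not Anthem's actual data model or systems:

```python
from datetime import date
from typing import Optional

# A minimal sketch of the personalization described above, using hypothetical
# field names -- not Anthem's actual data model or systems.
def checkup_message(member: dict, today: date) -> Optional[str]:
    first = member["first_name"]
    last_checkup = member.get("last_checkup_date")            # from claims history
    already_being_seen = member.get("has_recent_visit_claim", False)
    covered_in_full = member.get("checkup_covered_in_full", False)

    # Don't exhort people to do what the claims data shows they're already doing.
    if already_being_seen:
        return None

    cost_line = (
        "Your plan covers it at no cost to you."
        if covered_in_full
        else "A copay may apply under your plan."
    )

    if last_checkup and (today - last_checkup).days < 400:
        opener = (f"{first}, thanks for getting your checkup last year -- "
                  "keep up the good work.")
    else:
        opener = f"{first}, it's been a while since your last checkup."

    return f"{opener} Schedule this year's visit early. {cost_line}"

# Example: a member who had a checkup six months ago and whose plan covers it in full.
print(checkup_message({
    "first_name": "David",
    "last_checkup_date": date(2021, 9, 15),
    "checkup_covered_in_full": True,
}, today=date(2022, 3, 15)))
```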

The Postcard in the mail

You might think it couldn't get worse. You'd be wrong.

After I got the email, a postcard showed up in the regular US mail. A full-color postcard from my friends at Anthem! Here's the front of it, showing a person who looks just like me having a virtual doctor visit.

[Image: front of the Anthem postcard]

Anthem cares about my health and really wants me to get that checkup -- today! They care about it so much that they appear to have two whole departments, one for email and one for postal paper mail, each charged with getting me to get that checkup.

So what do they tell me on the back? Take a look:

[Image: back of the Anthem postcard]

Here's what the email said:

It's a good way to stay on top of current health issues and take care of any new ones early, before they become more serious.

Here's what the postcard said:

Having a checkup is one of the best ways to stay on top of current health issues and take care of any new ones early, before they become more serious.

Notice the similarities and the subtle differences -- it's clear that each department wanted to assert its independence and word the exhortation in the way it thought best. The email modestly said "it's a good way," while the postcard went all the way, saying it's "one of the best ways." How much time in meetings was spent getting the wording exactly right, do you think?

Last but not least is the issue of cost. Like with the email, the postcard strongly asserted that the cost is completely covered. But then there's that little asterisk, hinting that you might want to look at the tiny little print at the bottom of the page, where you find maybe it's not free after all. At least there was no mention of Sydney. I guess the paper mail department is jealous, and wants to avoid promoting the thing those snotty folks in IT keep yammering on about.

Anthem Leads the way

You might think from this that Anthem is incapable of going beyond the 1-2 punch of emails AND mass paper mailing. Incapable of doing basic software of the kind I was writing in high school, software that is little but common sense. I will let the evidence speak for itself.

Whatever Anthem may or may not be doing in terms of keeping up with paper mail and adding an electronic version, a little searching reveals that Anthem is spending huge amounts of time and money on "advanced digital" whatever, fashionable things like AI, ML and the rest of the lah-de-lah.

To discover Anthem's strategy, you have to find and sift through an array of websites that aren't the Anthem.com site you would expect.

Here is part of what the Anthem CEO says in the most recent annual report: "The traditional insurance company we were has given way to the digitally-enabled platform for health we are becoming. This platform strategy is grounded in data and deploys predictive analytics, artificial intelligence, machine-learning and collaboration across the value chain to produce proactive, personalized solutions for our consumers, care providers, employers, and communities." I guess that means they're working on getting AI and ML to send me an email that's "personalized" sometime soon. Maybe.

Anthem has a Chief Digital Officer. Here's what he said in that same annual report: "At Anthem, we have built the industry’s largest platform, integrating our immense data assets, proprietary AI, and machine-learning algorithms." Is this just a lab project? No! "It’s through this platform that we are able to digitize knowledge and create a more agile and seamless experience for our consumers, customers ..." I guess digitizing my name and slipping it into an "agile and seamless" email to me is right around the corner!

In May 2020 Anthem signed a major "digital transformation" deal with IBM. According to Anthem's CIO Tim Skeen, "We are seeing a dynamic change in the healthcare industry, requiring us to be more agile and responsive, utilizing advanced technology like Artificial Intelligence (AI) to drive better quality and outcomes for consumers." Sounds good! If IBM's Watson AI can beat the world champion Ken Jennings at Jeopardy, I guess it's just a matter of time until it figures out how to personalize emails.

A glowing article last year quoting the Anthem Chief Technology Officer described how Deloitte and AWS are helping Anthem deliver "measurable benefits" such as "capabilities that use AI/ML, cognitive, analytics, and data science" to implement their strategic vision, one of whose key tenets is " 'n=1' personalization through consumer-driven whole-health products and services." Is it possible that the strategic vision of "n=1" personalization will enable them to send me an email that's to me, instead of a brochure? We'll see.

At yet another website of Anthem's I discovered that they have a Product Management and Strategy Lead who talks about how Anthem is "using predictive models and machine learning to provide consumers with the unique information, programs, and services they need ..." There's a VP of AI Technology who is "harnessing machine learning and AI ..." There's a VP of Innovation who is "... implementing innovative solutions ..."

What a wealth of important people and efforts, all bringing digital transformation to Anthem! With all this industry-leading technology, it's only a matter of time before I receive something from Anthem that isn't a postcard with the added bonus of a digital brochure, don't you think?

Conclusion

See this post and the summary at the end for links to other amazing achievements of the Anthem software team -- which extends from bad communications to it's-really-bad cyber-security involving massive losses of customer personal information.

It's clear that Anthem, like most companies of its kind, pays huge amounts of attention to the current "thing," whatever that is, making sure everyone knows they're leading the way. Meanwhile, they largely ignore trivial things that are "beneath" them, things like treating customers moderately well. It starts with avoiding paying attention to the foundation of everything, which is data. Then it's compounded by the perverse status hierarchy in software in general and data science in particular; the hierarchy is simple: the farther you are away from real human customers, the higher your status. I hope this will change, but I'm not betting on it. Meanwhile, I remain grateful for the payments they make for the health care services I receive.

Posted by David B. Black on 04/03/2022 at 05:51 PM | Permalink | Comments (0)

The Facts are Clear: Eat Lots of Saturated Fat

The experts and authoritative institutions are clear: you should eat a low-fat diet and take drugs to reduce your blood LDL cholesterol to safe levels in order to make your heart healthy. Here is their advice about saturated fat and about blood cholesterol. The capital-E Experts are wrong. They were wrong from the beginning. There was never any valid evidence in favor of their views, in spite of what you might read. The quantitative and biochemical evidence is now overwhelming. Here is my summary of the situation. In this post I’ll cover more of the evidence.

Origins and growth of the saturated fat – cholesterol – heart hypothesis

How did such a bogus theory get started? An experiment with intriguing results was one start. Here's a summary:

The hypothesis harks back to the early part of the twentieth century, when a Russian researcher named Nikolai Anitschkow fed a cholesterol [animal fat] rich diet to rabbits and found that they developed atherosclerosis (hardening of the arteries, the process which in the long run leads to cardiovascular disease). … Rabbits, being herbivores, normally have very little cholesterol in their diets, while humans, being omnivores, generally consume quite a bit of cholesterol. Regardless, the data was suggestive, and led to the hypothesis being formulated.

A paper titled “How the Ideology of Low Fat Conquered America” was published in the Journal of the History of Medicine and Allied Sciences in 2008. Here is the abstract:

This article examines how faith in science led physicians and patients to embrace the low-fat diet for heart disease prevention and weight loss. Scientific studies dating from the late 1940s showed a correlation between high-fat diets and high-cholesterol levels, suggesting that a low-fat diet might prevent heart disease in high-risk patients. By the 1960s, the low-fat diet began to be touted not just for high-risk heart patients, but as good for the whole nation. After 1980, the low-fat approach became an overarching ideology, promoted by physicians, the federal government, the food industry, and the popular health media. Many Americans subscribed to the ideology of low fat, even though there was no clear evidence that it prevented heart disease or promoted weight loss. Ironically, in the same decades that the low-fat approach assumed ideological status, Americans in the aggregate were getting fatter, leading to what many called an obesity epidemic. Nevertheless, the low-fat ideology had such a hold on Americans that skeptics were dismissed. Only recently has evidence of a paradigm shift begun to surface, first with the challenge of the low-carbohydrate diet and then, with a more moderate approach, reflecting recent scientific knowledge about fats.

The early chapters of The Big Fat Surprise book provide a good summary with details of the rise to dominance of the low-fat & cholesterol-is-bad theory.

Strong Data Showing that Saturated Fat is Good

There were problems with the diet-heart hypothesis from the beginning.

The first chapters of The Big Fat Surprise have summaries of studies that were made on peoples around the world who subsisted almost exclusively by eating animals and/or dairy, all of them strongly preferring fatty organs over lean muscle.

A Harvard-trained anthropologist lived with the Inuit in the Canadian Arctic in 1906, living exactly like his hosts, eating almost exclusively meat and fish. “In 1928, he and a colleague, under the supervision of a highly qualified team of scientists, checked into Bellevue Hospital … to eat nothing but meat and water for an entire year.” “Half a dozen papers published by the scientific oversight committee [showed] that scientists could find nothing wrong with them.”

George Mann, a doctor and professor of biochemistry, took a mobile lab to Kenya with a team from Vanderbilt University in the 1960’s to study the Masai. They ate nothing but animal parts and milk. Their blood pressure and body weight were 50% lower than those of Americans. Electrocardiograms of 400 men showed no evidence of heart disease, and autopsies of 50 showed only one case of heart disease.

Similar studies and results came from people in northern India living mostly on dairy products, and native Americans in the southwest. There were many such studies, all of them showing that the native peoples, eating mostly saturated fat, were not only heart-healthy, but free of most other modern afflictions such as cancer, diabetes, obesity and the rest.

Of course the question was raised of other factors that might lead to these results. The questions have been answered by intensive studies. For example, some formerly meat-eating Masai moved to the city and lost their health. Likewise, doctors who studied Inuit whose diets had shifted to include lots of government-supplied carbohydrates found that they, too, lost their health.

From the book:

In 1964, F. W. Lowenstein, a medical officer for the World Health Organization in Geneva, collected every study he could find on men who were virtually free of heart disease, and concluded that their fat consumption varied wildly, from about 7 percent of total calories among Benedictine monks and the Japanese to 65 percent among Somalis. And there was every number in between: Mayans checked in with 26 percent, Filipinos with 14 percent, the Gabonese with 18 percent, and black slaves on the island of St. Kitts with 17 percent. The type of fat also varied dramatically, from cottonseed and sesame oil (vegetable fats) eaten by Buddhist monks to the gallons of milk (all animal fat) drunk by the Masai. Most other groups ate some kind of mixture of vegetable and animal fats. One could only conclude from these findings that any link between dietary fat and heart disease was, at best, weak and unreliable.

One of the foundational studies in the field is the Framingham Heart Study, started in 1948 and still going on.

In 1961, after six years of study, the Framingham investigators announced their first big discovery: that high total cholesterol was a reliable predictor for heart disease.

This cemented things. Anything that raised cholesterol would lead to heart disease. The trouble came thirty years later, after many of the participants in the study had died, which made it possible to see the real relationship between cholesterol and mortality due to heart disease. Cholesterol did NOT predict heart disease!

The Framingham data also failed to show that lowering one's cholesterol over time was even remotely helpful. In the thirty-year follow-up report, the authors state, "For each 1 mg/dL drop of cholesterol there was an 11% increase in coronary and total mortality."

Only in 1992 did William P. Castelli, a Framingham study leader, announce, in an editorial in the Archives of Internal Medicine:

In Framingham, Mass, the more saturated fat one ate ... the lower the person's serum cholesterol ... and [they] weighed the least.

Game over! No wonder they've kept it quiet. And not just about heart health -- about weight loss too!

Here is an excellent article with references to and quotes from many journals. Here is the introduction:

Many large, government-funded RCTs (randomized, controlled clinical trials, which are considered the ‘gold-standard’ of science) were conducted all over the world in the 1960s and 70s in order to test the diet-heart hypothesis. Some 75,000 people were tested, in trials that on the whole followed subjects long enough to obtain “hard endpoints,” which are considered more definitive than LDL-C, HDL-C, etc. However, the results of these trials did not support the hypothesis, and consequently, they were largely ignored or dismissed for decades—until scientists began rediscovering them in the late 2000s. The first comprehensive review of these trials was published in 2010 and since then, there have been nearly 20 such review papers, by separate teams of scientists all over the world.

Far from believing that saturated fat causes heart disease, we can be quite certain that it's positively healthy on multiple dimensions to eat it -- it's people who don't eat enough saturated fat who end up overweight and sickly!

Sadly, there are still Pompous Authorities who assure us with fancy-sounding studies that we really should avoid eating fat. This 2021 analysis dives into just such a fake study -- an RCT (randomized controlled trial) -- that purported to show that eating fat remains a bad idea. Wrong. Here's the summary:

Hiding unhealthy heart outcomes in a low-fat diet trial: the Women’s Health Initiative Randomized Controlled Dietary Modification Trial finds that postmenopausal women with established coronary heart disease were at increased risk of an adverse outcome if they consumed a low-fat ‘heart-healthy’ diet.

These books by Dr. Malcolm Kendrick dive in more deeply and are moreover a pleasure to read. Among other things, The Clot Thickens explains the underlying mechanisms of arteriosclerosis (blood clots, heart disease) and what actually causes them.

Here are several articles with evidence from many scientists on the subject of saturated fat.

Conclusion

This is an incredibly important issue regarding the health of people. It's also an in-progress example of the difficulty of shifting a paradigm, even when the evidence against the dominant paradigm (avoid eating saturated fat, use drugs to keep your cholesterol low) is overwhelming. Could it be possible that billions of dollars a year in sales of statins and related cholesterol-lowering drugs have something to do with it? Then again, when was the last time you heard a prestigious Expert or institution say "Sorry, we were wrong, we'll try hard not to blow it again; we won't blame you if you never trust us again."

Posted by David B. Black on 03/15/2022 at 10:51 AM | Permalink | Comments (0)

What is Behind the DCash Central Bank Digital Currency Disaster?

DCash, the digital currency issued by the ECCB (Eastern Caribbean Central Bank), is a pioneering effort with good intentions. Here is the background, covering how it was studied carefully, piloted in March 2019, had its first live transaction in February 2021, rolled out in March 2021, expanded in July 2021 and then, on January 14, 2022, went dead. Not just down for a few hours ... or days ... or weeks ... but long enough for any sensible person to completely give up on it. Then the ECCB announced that DCash would be back soon, and then announced that it was alive and well. The ECCB is lah-dee-dah, yes we had an "interruption" in service, but we're back better than ever!

What if someone stole your wallet and kept it from you for nearly two months? Why would any sane person convert real money to DCash if it can suddenly be stolen and held hostage for months? And not by criminals, but by the bank!

The ECCB is keeping the facts of this disaster largely hidden. I've quoted and analyzed what little they said at the time of the crash here.

Pre-announcing the Resumption

A couple days before they resumed service, ECCB announced that DCash was coming back. To regain trust and for the sake of transparency, you would think they would tell us what actually happened. Nope.

Here's their explanation:

In January 2022, the DCash system experienced its first interruption since its launch in March 2021. As a result, the processing of new transactions on the DCash network was halted. This interruption was not caused by any external intervention. The security and integrity of all DCash data, applications and architecture, including all central bank, financial institutions,  merchant and wallet apps remain secure and intact.   

Following the interruption, the ECCB took the opportunity to undertake several upgrades to the DCash platform including enhancing the system’s certificate management processes – the initial cause of the interruption, and updating the version of Hyperledger Fabric, the foundation of the DCash platform.  These upgrades have further strengthened the robust security mechanisms, which ultimately underpin the DCash technology, resulting in a more resilient product.

It "experienced its first interruption." Passive voice. Where did the "interruption" come from? Who did it? Why?

"As a result, the processing of new transactions on the DCash network was halted." As a result of what?? The processing "was halted" by whom?? The ECCB?

"This interruption was not caused by any external intervention." This implies no hacking. It was internal. Either a bad insider or something awful with the software that had (presumably) been running for months.

So they went about several "upgrades" -- not bug fixes or corrections. Then we get to "enhancing the system's certificate management process." Certificates are NOT about digital currency, they are standard web things, as I explained. And they "updated the version of Hyperledger Fabric," a standard library for blockchain. Updating to latest versions should be part of normal systems maintenance. It's not something that takes weeks! You do the upgrade, test it, run it in parallel with your current production system to assure it works, and then you seamlessly switch over. Groups large and small do this all the time. It's standard practice. Only creaky old organizations firmly anchored in the past would take a system down for hours to perform maintenance. Even they wouldn't dare take a system down for even a week!
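For what it's worth, here is a bare-bones sketch of that standard pattern. The function names are hypothetical stand-ins, not anything ECCB actually runs:

```python
import time

# A bare-bones sketch of upgrade-in-parallel-then-switch. These functions are
# hypothetical placeholders, not ECCB's actual systems or tooling.
def deploy(version: str) -> None:
    print(f"bringing up {version} alongside current production")

def healthy(version: str) -> bool:
    # In practice: hit a status endpoint, run smoke-test transactions,
    # and compare the new system's results against production's.
    return True

def switch_traffic(to: str) -> None:
    print(f"routing live traffic to {to}; old version kept warm for rollback")

def cut_over(new_version: str, parallel_run_minutes: int = 60) -> None:
    deploy(new_version)
    deadline = time.time() + parallel_run_minutes * 60
    while time.time() < deadline:
        if not healthy(new_version):
            raise RuntimeError("new version failed checks; production keeps serving")
        time.sleep(60)
    switch_traffic(to=new_version)

cut_over("hyperledger-fabric-upgrade", parallel_run_minutes=1)
```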

What's the result? ECCB has now "further strengthened the robust security mechanisms ... resulting in a more resilient product." Wow. The security mechanisms either had a fault or they didn't. The claim is that it took nearly two months to create a "more resilient product." A product that had been running live for nearly a year.

Announcing the Resumption

Next ECCB declared as promised that DCash was back. They provided no further explanation:

As part of the restoration, the platform now benefits from several upgrades including an enhanced certificate management process and an updated version of the software which provides the foundation for the DCash system. Extensive testing and assurance exercises were conducted prior to restoration of the platform to ensure full functionality of the service in accordance with quality assurance specifications.

Certificate management is standard internet stuff. It has nothing to do with crypto. Why wouldn’t they already have had the latest version working as part of their system? No excuse! If they just needed to upgrade, why not do it the way everyone does? They claim to “enhance” the certificate management process. Something unique for ECCB? Bad idea.

Hyperledger Fabric? Similar claims, same response.

They claim DCash is now “more resilient.” But there were no crashes during many months of operation. Therefore (according to them) DCash was already perfectly resilient.

They're hiding something. What is it??

Apps for Digital Transfer

You don't need a CBDC like DCash to quickly, easily, safely, cheaply and electronically move money around. In fact, we're all better off if central banks just ignored the whole issue. Here's my analysis of the situation, talking about a potential CBDC for the US that no one needs and describing how Venmo and CashApp work and are broadly accepted.

The ECCB made strong claims about the benefits DCash was going to bring -- all benefits that are already in production and in use by over 100 million people, operated by private companies without a CBDC. Nonetheless they went ahead. And crashed. And are clearly lying about it. What's going on??

The DCash App

As a brand-new currency, DCash needs an app. It's something the ECCB largely ignores on their self-promotional website. I wonder if there's anything to learn by digging into the DCash app? It turns out there is! Following is what I discovered.

I figured they must have a wallet app for Android. I went to the Google Play store and found the app:

[Screenshot: DCash wallet app listing in the Google Play store]

Sure enough, that's the wallet. But look over there on the upper right. 40 reviews, 2 stars out of 5. That's awful!

Let's look at some of them. Sadly, Google won't show them in time order.

The first review wasn't until March 27, 5 stars.

On Aug 15 we get 1 star with the comment "Bad." No response from ECCB. On Aug 31 there's a 3-star review with "*yu" as the comment. No response from ECCB. Mostly it's 1-star reviews, one after the other, many with thumbs-up votes from other users agreeing with the review's complaints.

Months later, Dec 12, we get 2 stars and "Efgy." And a response from ECCB!

[Screenshot: ECCB's response to the Dec 12 review]

Look more closely. The review was posted Dec 12 and the response was posted nearly a month later!! Really staying on top of things, aren't they?

I see they've got a special domain for feedback. This is the first I've seen of it. You would think it would be on the main site, wouldn't you? Let's check it out. I put the support site URL in my browser and this is the result:

[Screenshot: browser error -- the support domain doesn't exist]

No, I didn't type it wrong. Even though DCash is supposedly up and running just fine, the support site isn't just broken -- it's not there! The domain doesn't exist!!

Things are clearly just awful for the Android app. I wonder how it is for iPhone -- maybe it's wonderful? Here's the preview of the DCash app on the Apple App store:

[Screenshot: DCash app preview in the Apple App Store]

Only 5 ratings vs. the 40 ratings for Android. What's clear is that Apple users are MUCH more generous than Android users. The review by Waps7777 in Dec 2021 gave it 3 stars even though it reads: "DCrash not DCash. The app crashes every time is send a payment."

Conclusion

We still have no idea what happened with DCash. But it's pretty clear from the App store comments that the currency should be called DCrash. The announcements of ECCB say nothing about the apps. The people in charge are, as usual with people in charge, going to great lengths to hide problems and declare wonderfulness. But with the evidence on the table to date, DCrash is a disaster and should be shut down. If the authorities cared about real human beings other than themselves, they would apologize, shut down DCash, and make a deal with Zelle, Venmo, CashApp or someone who has a track record of real success to improve the lives of the people in the EC nations.

Posted by David B. Black on 03/11/2022 at 11:33 AM | Permalink | Comments (0)

DCash Government Cryptocurrency Shows Why Fedcoin Would Be a Disaster

The United States is seriously planning to issue FedCoin, a CBDC (Central Bank Digital Currency), following the lead of the Chinese government and others around the world. I have previously spelled out why we don’t need Fedcoin, basically because the currency of the United States is already largely digital. In this article I argue that not only don’t we need FedCoin, but that issuing such a CBDC has a strong potential for disaster. For a perspective that is broad and deep on this subject, see Oonagh McDonald’s recent book Cryptocurrencies: Money, Trust and Regulation.

The Eastern Caribbean Central Bank

Did you know that in 1983 eight countries in the eastern Caribbean banded together to create a central bank with a common currency? The ECCB is the equivalent of the US Federal Reserve for Anguilla, Antigua and Barbuda, the Commonwealth of Dominica, Grenada, Montserrat, St Kitts and Nevis, Saint Lucia, and St Vincent and the Grenadines.

The ECCB’s experiment with a Digital Currency

After considerable planning, the ECCB kicked off a pilot for a digital currency in 2019. According to their website:

The Eastern Caribbean Central Bank (ECCB) launched its historic DXCDCaribe pilot, on 12 March 2019. ‘D’, representing digital, is prefixed to ‘XCD’ - the international currency code for the EC dollar.

The pilot involves a securely minted and issued digital version of the EC dollar - DCash. The objective of this pilot is to assess the potential efficiency and welfare gains that could be achieved: deeper financial inclusion, economic growth, resilience and competitiveness in the ECCU - from the introduction of a digital sovereign currency.

DCash will be issued by the ECCB, and distributed by licensed bank and non-bank financial institutions in the Eastern Caribbean Currency Union (ECCU). It will be used for financial transactions between consumers and merchants, people-to-people (P2P) transactions, all using smart devices.

The pilot was declared a success. The phase 2 rollout of DCash started March 31, 2021.

The ECCB provides a detailed description of the excellence of the implementation and security of the DCash system. For example:

The DCash platform is being developed through security-by-design principles. Applications are subject to rigorous quality assurance, and independent security testing, prior to live deployment.  Hyperledger Fabric is being utilized to create an enterprise-grade, private-permissioned, distributed ledger (blockchain).  Modular and configurable architecture is used to facilitate DCash transfer, payment processing, and settlement across authenticated and authorized API’s. Additionally, all DCash users must be authenticated and authorized.

The application framework was designed with built-in mitigations against common web application vulnerabilities, and goes through a quality assurance process that includes rigorous security testing. Multi-factor authentication is required for financial institutions, all APIs are authenticated and authorized, and all participants are vetted. In addition, secure hardware elements are being used on mobile devices.

More details were provided to demonstrate the security and high quality of the system. In addition to unspecified data centers, the website states:

Google Cloud is the current service provider. With the exception of the minting system, all system services are hosted in Google Cloud. Connections between different system layers is secure (SSL/HTTPS) and permissioned (IP Address restrictions, username/ passwords, and JWT tokens).

There’s a Problem

So what happened to this wonderful, highly secure digital currency? It went down!

The ECCB announced on January 14, 2022 that there was a system-wide outage.

This break in service has been caused by a technical issue and the subsequent necessity for additional upgrades. Therefore, DCash transactions are not being processed at this time.

There were lots of words about how things would be OK.

Did it go down for an hour? Bad. A day? REALLY bad. A week or more? A complete, unmitigated, no-excuses disaster.

What if you were a user of DCash and you couldn’t use it? It would be like having money in your bank account, but the bank claims it’s unable to give you any! What are you supposed to do? To whom can you appeal? No one!

It’s worse than that. As of this writing at the end of February, a full six weeks after DCash D-Crashed, it’s still down.

Why did DCash go down?

We don’t know much. In early February it was reported:

The Eastern Caribbean Central Bank has revealed that an expired certificate caused its pilot central bank digital currency (CBDC), DCash, to go offline from January 14. Karina Johnson, the ECCB project manager for the DCash pilot, told Central Banking that “the version of Hyperledger Fabric (HLF)”, the network that hosts DCash’s distributed ledger, “had a certificate expire”. To install an up-to-date certificate, the currency’s operators are undertaking “a version change of HLF and associated...

This is really strange. If the language used is correct, a “certificate expiration” has nothing to do with digital currency or blockchain. A certificate is something issued by a “certificate authority” (CA), and certificates are used all over the web. For example, most web addresses start with https://. The “s” means secure, which means that the traffic between your browser and the website is encrypted. When a browser sees the https, it goes to the site, which sends back a certificate issued by a CA saying that the public/private key pair used by the site is legit.
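Just to underline how ordinary this plumbing is, here's a minimal sketch (Python standard library only, any public website will do) that fetches a site's certificate and reports when it expires -- exactly the kind of thing operations teams monitor routinely so expirations never catch them by surprise:

```python
import socket
import ssl
from datetime import datetime, timezone

# Minimal sketch: fetch a site's TLS certificate and report its expiry date.
# Ordinary web plumbing -- nothing specific to digital currency.
def cert_expiry(host: str, port: int = 443) -> datetime:
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' is a string like 'Jun  1 12:00:00 2022 GMT'
    return datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                  tz=timezone.utc)

print(cert_expiry("www.example.com"))
```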

There are NO certificate authorities in Bitcoin or other cryptos! There are just public/private key pairs, with the private key being used to “sign” a transaction sending Bitcoin from the corresponding public key – which assures that it really is the owner of the public key sending the BTC.
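For contrast, here is the key-pair model in miniature -- no certificate authority anywhere, just a private key that signs and a public key that verifies. This sketch uses the third-party ecdsa package purely for illustration; it is not DCash's or Bitcoin's actual code:

```python
# Illustration only: not DCash's or Bitcoin's actual code.
# Requires the third-party `ecdsa` package (pip install ecdsa).
from ecdsa import SigningKey, SECP256k1, BadSignatureError

private_key = SigningKey.generate(curve=SECP256k1)   # kept secret by the owner
public_key = private_key.get_verifying_key()         # shared with everyone

transaction = b"send 0.5 BTC from my address to yours"
signature = private_key.sign(transaction)

try:
    public_key.verify(signature, transaction)        # anyone can check this
    print("valid: the holder of the private key authorized this transaction")
except BadSignatureError:
    print("invalid signature")
```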

So what's going on and how could a "certificate expiration" have caused this? No one is saying. By the way, an expiration of this kind can normally be fixed very quickly -- in less than a day.

The next (and most recent as of this writing) thing that was publicly announced was this on Facebook on February 14:

[Screenshot: ECCB's February 14 Facebook announcement]

Why did DCash go down? Why is it still down after all this time? How are the consumers and merchants being helped with their funds being locked and inaccessible? No one is talking.

Conclusion

ECCB seems to have done everything right. They studied carefully. They worked with an experienced vendor that had done CBDC work before. They used the leading blockchain fabric. They used Google for hosting. They did a limited trial, released it in one of their regions, and then made it more widely available. And then something went wrong. Very wrong. What it could possibly be that involves "certificates expiring" is mysterious. Building something that can stay dead for over six weeks is extremely rare in software.

CBDC's are a terrible idea. We don't need them. They add nothing in terms of cost or speed to the digital fiat currency and associated software that we already have. How can any government guarantee that it won't have a DCash-style disaster when its own CBDC rolls out? Are governments suddenly wonderful at bringing out great software that works? I've got this bridge, by the way, and I can let you have it for a limited-time-only bargain price...

Note: this was originally posted at Forbes.

 

Posted by David B. Black on 03/09/2022 at 10:49 AM | Permalink | Comments (0)

Why Object-Orientation in Software is Bad

What?? Object-oriented programming (OOP) is practically the standard in software! It’s taught everywhere and dominates thinking on the subject. Most languages are O-O these days, and OO features have even been added to COBOL! How can such a dominant, mainstream thing be bad?

The sad truth is that the badness of OOP isn’t some fringe conspiracy theory. An amazing line-up of astute, brilliant people agree that it’s bad. A huge collection of tools and techniques have been developed and taught to help people overcome its difficulties, which nonetheless persist. Its claims of virtue are laughable – anyone with experience knows the benefits simply aren’t there.

Object-oriented languages and novels

Object-orientation is one of those abstruse concepts that makes no sense to outsiders and is a challenge for people learning to program to understand and apply. To make the OOP monstrosity clear, let’s apply OOP thinking to writing a novel.

There are lots of ways of writing novels, each of them suitable for different purposes. There are novels dominated by the omniscient voice of the author. There are others that are highly action-based. Others have loads of dialog. Of course most novels mix these methods as appropriate.

Some novels feature short chapters, each of which describes events from a particular character’s point of view. There aren’t many novels like this, but when you want to strongly convey the contrast between the characters’ experiences, it’s a reasonable technique to use, at least for a few chapters.

What if this were the ONLY way you were allowed to write a novel??!! What if a wide variety of work-arounds were developed to enable a writer to write  -- exclusively! -- with this sometimes-effective but horribly constricting set of rules?

What if ... the Word Processors (like Microsoft Word) from major vendors were modified so that they literally wouldn't allow you to write in any other way, instead of giving you the freedom to construct your chapters any way you wanted, with single-person-point-of-view as one of many options. What if each single small deviation from that discipline that you tried to include were literally not allowed by the Word Processor itself!? All this because the powerful authorities of novel creation had decided that single-person chapters were the only good way to write novels, and that novelists couldn't be trusted with tools that would allow them to "make mistakes," i.e., deviate from the standard.

There would be a revolution. Alternative publishing houses would spring up to publish the great novels that didn’t conform to the object-novel constraints. The unconstrained books would sell like crazy, the OO-only publishing houses would try to get legislation passed outlawing the unconstrained style of writing, and after some hub-bub, things would go back to normal. Authors would exercise their creative powers to express stories in the most effective ways, using one or several techniques as made sense. The language itself would not be limited or limiting in any way.

Sadly, the world of software works in a very different way. No one sees the byzantine mess under the “hood” of the software you use. No one knows that it could have been built in a tiny fraction of the time and money that was spent. Industry insiders just accept the systematized dysfunction as the way things are.

This is objects – a special programming technique with narrow sensible application that has been exalted as the only way to do good programming, and whose rules are enforced by specialized  languages only capable of working in that constrained way.

Is there nothing good about OOP?

It isn’t that OOP is useless. The concept makes sense for certain software problems – just as completely other, non-OOP concepts make sense for other software problems! A good programmer has broad knowledge and flexible concepts about data, instructions and how they can be arranged. You fit the solution to the problem and evolve as your understanding of the problem grows, rather than starting with a one-size-fits-all template and jamming it on. You would almost never have good reason to write a whole program in OO mode. Only the parts of it for which the paradigm made sense.

For example, it makes sense to store all the login and security information about a body of software in a single place and to have a dedicated set of procedures that are the only ones to access and make changes. This is pure object-orientation – only the object’s methods access the data. But writing the whole program in this way? You're doing nothing but conforming to an ideology that makes work and helps nothing.
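Here is a minimal sketch of that sensible case -- illustrative names, no particular system -- where the credentials live in one place and only the object's own methods can touch them:

```python
import hashlib
import os

# Illustrative sketch: credentials live in one place, and only this class's
# methods read or change them. Nothing outside sees the raw data.
class CredentialStore:
    def __init__(self) -> None:
        self._records = {}                     # username -> (salt, password hash)

    def register(self, username: str, password: str) -> None:
        salt = os.urandom(16)
        self._records[username] = (salt, self._digest(salt, password))

    def check(self, username: str, password: str) -> bool:
        if username not in self._records:
            return False
        salt, stored = self._records[username]
        return self._digest(salt, password) == stored

    @staticmethod
    def _digest(salt: bytes, password: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

store = CredentialStore()
store.register("dave", "correct horse battery staple")
print(store.check("dave", "correct horse battery staple"))   # True
print(store.check("dave", "wrong password"))                  # False
```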

However. When you embody OOP in a language as the exclusive way of relating data and code, you’re screwed.

In this post I describe the sensible origins of object-orientation for describing physical simulation, for example ships in a harbor. Having a whole language to do it was overkill – I describe in the post how hard-coding the simulation in language statements made it hard to extend and modify, compared with moving the model description into easily editable metadata -- and then into a provably best optimization model.

That is the core problem with object-oriented languages – they are a hard-coded solution to part of a programming problem, rather than one which creates the most efficient and effective relationships between instructions and data and then increasingly moves up the mountain of abstraction, each step making the metadata model more powerful and easier to change. Object-oriented concepts are highly valuable in most metadata models, with things like inheritance (even multiple inheritance, children able to override an inherited value, etc.) playing a valuable role. Keeping all the knowledge you have about a thing in one place and using inheritance to eliminate all redundancy from the expression of that knowledge is incredibly valuable, and has none of the makes-things-harder side effects you suffer when the object-orientation is hard-coded in a language. In the case of simulation, for example, the ultimate solution is optimization – getting to optimization from object-oriented simulation is a loooong path, and the OOP hard-coding will most likely prevent you from even making progress, much less getting there.
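As a toy illustration of that idea -- inheritance and overrides expressed in editable metadata rather than hard-coded language classes -- consider the following sketch. The entities and attributes are made up:

```python
# A toy illustration: object-oriented ideas (inheritance, overrides) expressed
# as editable metadata rather than hard-coded language classes.
MODEL = {
    "vessel":       {"draft_m": 5.0, "max_speed_kn": 12, "needs_tug": False},
    "tanker":       {"parent": "vessel", "draft_m": 15.0, "needs_tug": True},
    "small_tanker": {"parent": "tanker", "draft_m": 9.0},
}

def resolve(name: str) -> dict:
    """Flatten the inheritance chain: parent values first, child overrides last."""
    entry = MODEL[name]
    base = resolve(entry["parent"]) if "parent" in entry else {}
    return {**base, **{k: v for k, v in entry.items() if k != "parent"}}

print(resolve("small_tanker"))
# {'draft_m': 9.0, 'max_speed_kn': 12, 'needs_tug': True}
```

Changing the model means editing data, not rewriting and recompiling a class hierarchy.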

Conclusion

Any reasonable programmer should be familiar with the concepts of encapsulation, inheritance and the other features of object-oriented languages. Any reasonable programmer can use those concepts and implement them to the extent that it makes sense, using any powerful procedural language, whether in the program itself or (usually better) the associated metadata. But to enforce that all programs be written exclusively according to those concepts by embedding the concepts in the programming language itself is insanity. It's as bad as requiring that people wear ice skates, all the time and every day, because ice skates help you move well on ice when you know how to use them. If everything were ice, maybe. But when you try to run a marathon or even climb a hill with ice skates on, maybe you can do it, but everyone knows that trading the skates for running shoes or hiking boots would be better. Except in the esoteric world of software, where Experts with blinders on declare that ice skates are the universal best solution.

Posted by David B. Black on 03/08/2022 at 09:56 AM | Permalink | Comments (0)

Hurray Up! It's Almost Two Late Two Celebrate Two's Day

The magnificent, once-in-a-century Two's Day itself has already come and gone. But while it's still fairly large in the rear-view mirror, there is still time to celebrate the nine-day wrapper around the day (and hour and minute) of Two's Day itself. Because we're still early in the amazing nine-day-long celebration of Palindromic Two's Week.

Today is February 23, 2022. It's a pretty two-y day, right? Particularly when you toss out boring and easily-misspelled "February" and replace it with a nice proper two, as in 2/23/2022.

We all know we're in the middle of the 2000's, I trust. Just as the Y2K bug happened because nearly everyone left off the repetitive and largely useless 19 from dates in that fast-fading-into-the-past century, most people leave off the ubiquitous 20 from the year now. What does that make today? Let me spell it out for you:

2/23/22

Oh, boy. I can tell from your silence that you don't get it yet. Let me make it easier:

2 2 3 2 2

It's a PALINDROME!

OK, let me spell it out for you. Literally. Here are a couple of examples. Hint: try reading each word or phrase backwards and see if you notice something.

racecar

repaper

top spot

never odd or even

Now go back to today's date. Not quite as cool as yesterday:

2 2 2 2 2

But still awesome in its own way. Let's reach back into the past and go fearlessly forward in the land of dates (but not nuts ... uh, OK, maybe numerical nuts...):

2 2 0 2 2

2 2 1 2 2

2 2 2 2 2

2 2 3 2 2

2 2 4 2 2

2 2 5 2 2

2 2 6 2 2

2 2 7 2 2

2 2 8 2 2

Before the palindromic nine day celebration, there was the pathetic:

2 1 9 2 2

and then there will be the huge let-down of

3 0 1 2 2

Yet another disappointment is that the perverse designers of the month system, in addition to making the spelling of Feb-ru-ary weird, gave it a pathetic 28 days most of the time, dissing it yet again.

All I can say is, let's CELEBRATE the nine days of Palindromic Two's Week while we're in it!
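And if you'd rather let a machine check the arithmetic, here's a throwaway script that finds every palindromic M/DD/YY date in February 2022 (with the "20" dropped from the year, as discussed):

```python
from datetime import date, timedelta

# Throwaway check: which February 2022 dates are palindromes when written
# as M/DD/YY with the leading "20" of the year dropped?
def is_palindromic(d: date) -> bool:
    digits = f"{d.month}{d.day:02d}{d.year % 100:02d}"
    return digits == digits[::-1]

d = date(2022, 2, 1)
while d.month == 2:
    if is_palindromic(d):
        print(f"{d.month}/{d.day:02d}/{d.year % 100:02d}")
    d += timedelta(days=1)

# Prints 2/20/22 through 2/28/22 -- the nine days of Palindromic Two's Week.
```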

 

Posted by David B. Black on 02/23/2022 at 09:38 AM | Permalink | Comments (0)

The Experts are Clear: Keep your Cholesterol Low

Everyone knows it’s important to maintain a healthy diet, things like avoiding fatty meat and fish and whole-fat dairy products. All the experts tell us it’s so, and the nutrition guides on food products help us choose food wisely. Everyone knows what “fat” is. Most of us have also heard of “cholesterol,” but it’s not so clear just what that is. It gets clear when you visit a doctor, have your blood tested, and hear the doctor tell you that your cholesterol levels are dangerously high. The doctor says you’ve got to get your cholesterol under control, or else your odds of getting heart disease and dying early go way up.

The doctor will probably tell you that you can help yourself by eating less saturated fat, which causes cholesterol to rise. Depending on how high your numbers are, the doctor may also put you on statin drugs, which lower your cholesterol levels the same way other drugs help lower dangerously high blood pressure. It’s just something you have to do in order to lead a long and healthy life. Are you ready for an incapacitating heart attack, or are you going to take a couple of pills every day? Is that so bad?

The CDC

Let’s make sure this is really true. Let's go to the federal CDC, the Centers for Disease Control and Prevention.

[Screenshot: the CDC website]

Hey, they've got a whole section on cholesterol! Fortunately the CDC makes clear that it’s a myth that all cholesterol is bad for you. There’s HDL, which is good for you. And then there’s…

LDL (low-density lipoprotein), sometimes called “bad” cholesterol, makes up most of your body’s cholesterol. High levels of LDL cholesterol raise your risk for heart disease and stroke.

They go on to explain exactly why LDL is bad for you:

When your body has too much LDL cholesterol, it can build up in the walls of your blood vessels. This buildup is called plaque. As your blood vessels build up plaque over time, the insides of the vessels narrow. This narrowing can restrict and eventually block blood flow to and from your heart and other organs. When blood flow to the heart is blocked, it can cause angina (chest pain) or a heart attack.

There is something you can do with your diet to help things:

Saturated fats can make your cholesterol numbers higher, so it’s best to choose foods that are lower in saturated fats. Foods made from animals, including red meat, butter, and cheese, have a lot of saturated fats.

But then, in the end, the important thing is to avoid getting a heart attack or stroke. The good news is that that there are drugs to help:

Although many people can achieve good cholesterol levels by making healthy food choices and getting enough physical activity, some people may also need medicines called statins to lower their cholesterol levels.

Department of Health and Human Services (HHS)

Is the government united in the effort to reduce bad cholesterol? Let’s make another check, with the appropriately named Department of Health and Human Services (HHS).

Apparently the whole world, according to WHO, is sure that heart disease is a huge killer:

Cardiovascular diseases—all diseases that affect the heart or blood vessels—are the number one cause of death globally, according to the World Health Organization (WHO).

They’re also sure that, in addition to diet, cholesterol has a firm place on the list of heart-harming things:

Your health care provider can assess your risk for cardiovascular disease through preventative screenings, including weight, cholesterol, triglycerides, blood pressure, and blood sugar.

The American Heart Association (AHA)

How about the professional organization of heart doctors – what’s their position on cholesterol? It’s pretty clear:

LDL cholesterol is considered the “bad” cholesterol, because it contributes to fatty buildups in arteries (atherosclerosis). This narrows the arteries and increases the risk for heart attack, stroke and peripheral artery disease (PAD).

Harvard Medical School

Better check with the people who train the best doctors. Let's make sure this is really up to date.

[Screenshot: the Harvard Medical School website]

Here's what they have to say:

Too much LDL in the bloodstream helps create the harmful cholesterol-filled plaques that grow inside arteries. Such plaques are responsible for angina (chest pain with exertion or stress), heart attacks, and most types of stroke.

What causes a person's LDL level to be high? Most of the time diet is the key culprit. Eating foods rich in saturated fats, trans fats, and easily digested carbohydrates boost LDL

OK, but what if for various reasons diet doesn't get things under control?

Several types of medication, notably the family of drugs known as statins, can powerfully lower LDL. Depending on your cardiovascular health, your doctor may recommend taking a statin.

Conclusion

The science has spoken. The leading authorities in the field of heart health speak it clearly, without reservation and without qualification. Heart attacks are a leading cause of death everywhere. Arterial plaques cause heart attacks. Arterial plaques are caused by having too much LDL, the bad cholesterol, in the blood. Your LDL is raised by eating too much saturated fat. You can reduce your chances of getting a heart attack by strictly limiting the amount of saturated fat you eat and by taking drugs, primarily statins, that reduce the amount of LDL.

Why wouldn’t any sane person at minimum switch to low-fat dairy and lean meats, if not go altogether vegan? And then, to be sure, get their blood checked to make sure their LDL level is under control.  The only one who can keep you healthy is YOU, blankity-blank-it! And if you by chance run into some crank telling you otherwise, you shouldn’t waste your time.

Posted by David B. Black on 02/21/2022 at 01:56 PM | Permalink | Comments (0)

Medicine as a Business: Medical Testing 6: Another Test

When you have a tumor that's supposed to be vanquished by radiation therapy but refuses to go away, you're supposed to check on it periodically to see if it's resumed rapid-growth mode. While experience hasn't made my heart grow fond of MRI's, I reluctantly decided to give it another go, since I still have lumps I shouldn't have.

Here's what happened last time. I'm reluctant to dive into MRI-world again because even simple medical scheduling like for a covid test is a big problem -- but small compared to the nightmare of scheduling something like an MRI. Why don't I just go somewhere else where it's done well? Hah! Fat chance. And even then the burden would be on me to pry what are supposedly MY records from the iron grip of the multiple EMR's of my current system.

This time was an adventure -- a new kind of screw-up!

Scheduling the test

You would like to think that a doctor would keep on top of his/her patients and notify them when they're supposed to do something. Like my vet does for my cat! The rhetoric is that they do. It's possible that some of them do -- though how they manage it when having to spend nearly half their time entering an ever-growing amount of stuff into EMR's that are supposed to make things better is a testament to dedication and likely early burn-out.

The burden was on me to remember to schedule this standard-protocol follow-on test for my cancer. Clearly the big-institution medical center wasn’t up to the job. Neither was my insurance company, which is glad to pepper me with reminders to get my blood pressure tested by a doctor, something I regularly do myself at home. Test for cancer? It’s beyond them. A straightforward workflow software system would handle it all automatically.
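Something like the sketch below is all the "workflow" in question amounts to. The field names and the notify() stand-in are hypothetical, not any real EMR or insurer system:

```python
from datetime import date, timedelta

# A minimal sketch of the follow-up workflow described above. Field names and
# the notify() stand-in are hypothetical, not any real EMR or insurer system.
FOLLOW_UP_INTERVAL = timedelta(days=365)
REMINDER_WINDOW = timedelta(days=30)

def notify(recipient: str, message: str) -> None:
    print(f"to {recipient}: {message}")

def check_follow_up(patient: dict, today: date) -> None:
    due = patient["last_scan_date"] + FOLLOW_UP_INTERVAL
    if today >= due - REMINDER_WINDOW:
        notify(patient["email"],
               f"Your follow-up MRI is due around {due}. Reply or call to schedule.")
        notify("scheduling-desk",
               f"Patient {patient['id']} is due for a follow-up MRI on {due}.")

check_follow_up(
    {"id": "12345", "email": "patient@example.com", "last_scan_date": date(2020, 1, 8)},
    today=date(2021, 11, 16),
)
```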

I was supposed to get the next test a year after the prior one. I let it slip. No one reached out to me, of course. It's now nearly two years. Sigh.

I reached out via email on Nov 16, because I know calling is pointless. After many interactions, on Dec 22 I was told I had an appointment -- ignoring, of course, my request to make it myself. At least I got it and could arrange things to be there on January 8.

Two days before the appointment I got a brief reminder voicemail and an email. The email didn't happen to mention a time or place. I guess they trusted me to know -- unlike any normal scheduling reminder system. But it did give me a ton of words about covid and safety, and requested that I spend time filling out forms online, which I did. Including uploading my driver's license and insurance card.

Taking the test

I arrived on time. After 45 minutes of claustrophobic rigid motionlessness to assure a good quality MRI while being bombarded by loud noises, the tech stopped things and asked me about my tumor and its location, which is under my shoulder blade. His reply: "We have to stop the MRI test. I'm following the test order, but I just looked at the prior scans and they're different! This order says 'shoulder,' which means around the joint. What past scans did was scapula, including all the way to near the backbone. This machine can't capture that. We'll have to restart you with the other machine here that can."

There was more conversation, all polite on my side, since the tech took initiative and was saving me from thinking everything was fine and having to come back to get the scan done correctly.

Even better, the facility wasn't busy, and the tech took the initiative to get me scanned at the correct machine. I was delayed by an hour and had extra practice at remaining immobile under aural bombardment, but OK. I warmly thanked both techs for their initiative and flexibility and went on my way.

Simply copying and sending in the same order as before was apparently beyond the esteemed radiation center director and/or his staff. I guess I should have gone elsewhere after the time I came in for an appointment after a scan had been done and he carefully examined … the wrong shoulder blade -- and then only switched after the second time I politely mentioned he was looking at the wrong side.

Seeking the test results

What you're supposed to do is make a follow-up appointment with the director of the radiation oncology center to get your results. As in the past, I want to see the results myself. I have previously made an account on the system's patient access portal to do this. I entered the login information and got told this:

[Screenshot: portal message saying my account has been deactivated]

Less than a year after my prior access, they de-activated me. Do banks inactivate accounts for lack of use? How about email accounts? Or anything else? Exactly what horrible consequence is being averted by prompt de-activation? Right.

I read through all the material. Only by downloading a PDF file was I able to get the phone number I had to call, which was the only path back to activation. I called and, after much of the usual nonsense, I got through to a person who, after learning everything about me except my favorite flavor of ice cream, gave me a code to enable me to enter a new miraculously complex password and have access to ... my own data, blankity-blank it!

The Surprise Appointment

Remember when I asked to make my own MRI test appointment so I could be sure it was at a time I could make? And one was made on my behalf? Imagine making a reservation at a restaurant where they TELL YOU when your reservation will be -- because we're nice; after all, we don't have to let you come, so we'll fit you in when it suits us. This is what the MRI appointment was like.

Now, having logged into MyChart -- finally -- I discovered I had an appointment to see the director of radiation oncology! Surprise! When were you going to tell me, guys? I was nice and called the number, wove my way through the phone maze and found someone who claimed he would "tell the director." Not cancel the appointment; tell the director. Is some form of after-school detention coming my way to punish me for this refusal of an appointment? We'll see.

Why are they so insistent on my having an appointment with the director to "go over my results" with me? Simple: they want to be able to generate a claim for a visit.

Trying to get my test results

Given that they made an appointment for me to see the doctor two days after the test, it’s a fair assumption that the test results have been filed. I’ve been on the system’s patient access portal and the radiation center’s separate (of course) system every day. They clearly have the results. They refuse to let me have them.

Refusing to provide patients timely access to their test results should be a crime. Why? In the most basic way, they are my property. Suppose I go to a tailor and get measured for a custom suit. I pay for the suit. Then the tailor refuses to give me the suit, and ignores my requests. If I go back to the tailor shop, the tailor says “I don’t deliver the suits. I just take measurements, make a suit, and give it to my team.” How do I get my suit then? “Go to MyTailor.com, sign in and it will be there.” What if it’s not? “Sorry, it’s not under my control.” Is the tailor shop committing a crime, taking money and refusing to deliver what was paid for? Of course! But in the wonderful world of medical business, this is standard practice.

Beyond the crime issue, those test results often concern health problems that patients can be incredibly anxious about! Like me. I’m writing this making liberal use of my right arm and fingers, which the cancer could kill. Could it be worse? Yes. But am I anxious to see those results? You betcha!

All the rhetoric is that patients have the "right" to have full access to their own records. Wonderful modern medical record systems crow about how they support this full access. Lies. Blatant and pernicious. And no one does anything about it! Not only isn't it a scandal, it isn't even news.

Read here about the breakthrough in hospital EMR electronic data exchange. Read here, here, here and here about prior adventures on the same subject. Summary: compared to past experiences, this was pretty good!

Getting the pre-auth

I’d kind of like to see the actual pre-auth so I can see the test order, a thing that the insurance company should have denied because it was wrong. I went to their website, which is of course down.

Anthem

They say I can get what I need from their wonderful app Sydney, but it doesn’t have the information. Of course. Forget it. I have better things to do. I already know that the armies of highly paid IT professionals at Anthem can't build software, so there's no point beating a dead horse.

Getting my test results

After finally gaining access to MyMountSinai I log in. Of course the test isn't there. Given that they made an appointment for me to see the doctor two days after the test, I'm pretty sure they have it. They're just taking their sweet time to let me see it. Because patient satisfaction is important to them, you know.

I check the next day. The next. Next. Next. A couple more. I finally email the doctor who ordered the test, politely asking if he would send it to me. A couple days later I got an email from Mount Sinai:

[Screenshot: Mount Sinai email announcing that test results are available]

Amazing! The results for "David A. Black" are in! I wonder who that is? A long-lost relative? I'm David B. Black. No wonder matching patient health records is a problem.

I carefully read through the report. Here's the punch line:

[Screenshot: excerpt from the radiology report]

"No convincing evidence of progression..." is definitely "appreciated" by me! While I'd much rather that it was gone, sullenly sitting in my body not growing I'll gladly take.

The doctor later responded to my email saying he would forward the test results, which arrived. The substance was the same, but Mount Sinai had gone to the trouble of omitting information from the report officially released to me that was present in the copy forwarded from their own internal system -- for example, the name of the doctor who wrote the report. Instead of simply copying the information they have to enable me to access it, they've taken the trouble to create software that picks and chooses exactly which -- of my possessions! -- they will deign to allow me to have. When they feel like it.

Conclusion

In the overall scheme of things, everything I experienced was small potatoes. I'm healthy and alive. This doesn't come close to being in the ballpark of the deaths and serious issues resulting from medical error and the costly, health-harming impact of standard medical practices that have been proven to be wrong, but which the authorities refuse to change because it would mean admitting error.

My experience is nonetheless a good example of the business-as-usual gross inefficiencies of the medical system that drive up costs, cause endless patient trouble and generally make things far worse than they should be. This isn't about exotic new biomedical discoveries. It's about things that should be plain, ordinary common-sense processes and software of the kind widely used in fields like veterinary medicine, and that should be the standard in human medicine. But aren't. One is tempted to think in terms of self-absorbed, heads-in-the-clouds elites, but all I've got is mountains of anecdotal evidence and no serious RCT's (randomized controlled trials, the gold standard of medical studies) in favor of that hypothesis, so I'll just put it aside.

Posted by David B. Black on 02/15/2022 at 09:15 AM | Permalink | Comments (0)

Two’s-day, February 22, 2022: an EXTREMELY Rare Day

What will happen on Tuesday, February 22, 2022 is something remarkably rare in history. 2/22/2022 has SIX two’s and a zero. The recently-passed 2/2/2022 was also pretty amazing, well worth making a big deal out of were it not for its grander cousin following just 20 days later. 2/2/2022 has only FIVE two’s and even worse, it fell on that ignominious day of the week Wednesday.

Wednesday is a terrible day. It’s the low point of the week, just as far from the last weekend as it is to the next one. It’s a contradiction in terms, a Wednesday trying to pass itself off as a Two’s-day. And the spelling! It’s pronounced “Wen’s day.” So why in the world does it spell itself “Wed ness day?” To trip up third graders on spelling tests?

You think it’s not such a rare thing? How about Feb 22, 1922, you might say. Well, here’s the dirt:

2/22/1922 had only 5 two’s.

The upcoming AMAZING day not only has 6 of those cool two’s, but the only non-two on the day is “nothing” to be worried about. Literally zero.

2/22/1922 had LOTS of non-two’s to upset anyone looking for beauty and consistency. There was a one. And, even worse, a nine. There is no rational come-back to those glaring errors.

If you’re still hanging on to the imagined glory of 2/22/1922, consider this: it was a Wednesday! How can a legitimate Two’s-day fall on a Wed-nes-day, I ask you?

You might dig in your heels and say “that’s fine about the past. But what about the future? Isn’t it obvious that 2/22/2222, a date that’s only 200 years from now would be even better?  That’s seven two’s! Beat you!”

Ummmm, no you didn’t. Yes, it’s got an extra two. Good for it. But it doesn’t fall on a Tuesday. And to answer your desperate comeback, neither does 2/2/2222.
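If you want to check the weekday claims for yourself, here's a minimal sketch using Python's standard datetime module. The dates are exactly the ones discussed above, and only the glorious 2/22/2022 should come back as a Tuesday.

from datetime import date

# The all-two-ish candidate dates discussed above.
candidates = [
    date(1922, 2, 22),   # five twos, plus a one and a nine
    date(2022, 2, 2),    # five twos
    date(2022, 2, 22),   # six twos and a zero -- the main event
    date(2222, 2, 2),    # six twos, but 200 years away
    date(2222, 2, 22),   # seven twos
]

for d in candidates:
    # strftime('%A') prints the full weekday name.
    print(d.strftime("%m/%d/%Y"), "->", d.strftime("%A"))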

The glorious Two’s-day, 2/22/2022

When should we celebrate? On the day, of course, but I mean exactly when?

Think about it. Tick, tick, tick.

The fireworks should go off and the bottles should be popped exactly when the 24-hour clock hits twenty-two seconds past twenty-two minutes past ten o’clock in the evening, in other words:

22:22:22

At that exact second it will be:

22:22:22 on Tuesday, 2/22/2022

Two's-Day!!!

What could possibly be better than that??

Get ready, folks. Make your preparations. This isn’t just a once-in-a-lifetime event, it’s a once-practically-EVER event!

Here's how one teacher is helping her elementary school class celebrate:

[Photo: an elementary school classroom's Two's-day celebration]

 

Posted by David B. Black on 02/06/2022 at 04:59 PM | Permalink | Comments (0)

The Experts are clear: Don’t Eat Much Saturated Fat

Any reasonably aware person knows that it’s important to maintain a healthy diet. High on the list of what “healthy eating” means is limiting the amount of saturated fat in your diet. This impacts all the meat and dairy products you consume. You should only drink reduced-fat milk for example. If you must eat meat, make sure it’s lean, and never eat something obviously fatty like bacon. This isn’t just something experts say at their conferences. It’s the official recommendation of all government bodies, and brought to the attention of ordinary people by nutrition labels on food products. Warning: there are contrarian views on this subject.

Cheese

Here’s a nice goat cheese I bought:

Goat cheese front

When you turn it over, here’s most of the nutrition label:

Goat cheese back

Wow, calories must be important – they’re first and in big type. Right after calories comes Fat.  It must be really important, because I’m told not just how much fat there is, but how much of the fat I’m allowed to eat a day is in each serving.

This is interesting. There are 6 grams of Total Fat, which is only 8% of my daily allowance, but 4 grams of that fat is Saturated Fat -- 2/3 of the total -- and that's 20% of my daily allowance. Couldn't be clearer: I can eat a fair amount of fat, but I'd better make sure that only a small part of it is Saturated. Doing the arithmetic, they only want me to eat 20 grams of Saturated Fat, while I'm allowed roughly 75 grams of Total Fat.
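To double-check the label math, here's a minimal sketch that back-calculates the implied daily allowances from the gram and percent figures printed on the label. (The percentages on real labels are rounded, which is why the total-fat figure comes out near, rather than exactly at, the FDA's published Daily Value of 78 grams.)

# Figures taken straight from the goat cheese label above.
total_fat_g, total_fat_pct = 6, 0.08   # 6 g is 8% of the daily value
sat_fat_g, sat_fat_pct = 4, 0.20       # 4 g is 20% of the daily value

implied_total_fat_dv = total_fat_g / total_fat_pct   # ~75 g (official DV: 78 g; 8% is rounded)
implied_sat_fat_dv = sat_fat_g / sat_fat_pct         # 20 g

print(f"Implied Total Fat daily value: {implied_total_fat_dv:.0f} g")
print(f"Implied Saturated Fat daily value: {implied_sat_fat_dv:.0f} g")
print(f"Share of this cheese's fat that is saturated: {sat_fat_g / total_fat_g:.0%}")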

I wonder if I’m getting this right, because some of those labels seem like things you should get lots of, like vitamins and potassium. I’d better check.

FDA

Oh, good, the FDA’s food label page  links right to a whole initiative they sponsor, the Healthy People initiative! How great is that, they’re concentrating on the big picture, keeping us all healthy. What a great government we have!

Here’s what they have to say about diet at a high level:

Healthy diet

Pretty clear, huh? Just like I said above: eat only lean meat, and low fat dairy. Saturated fats are bad for you. Everyone knows it. The importance is so great, it’s on the label of nearly every food product.

American Heart Association (AHA)

Let’s admit it, though, sometimes the government lags behind the latest science. Let’s make sure that’s not the case here.

What about the major medical organization that concentrates on heart, the American Heart Association? Their position seems very clear:

Heart

They sound pretty sure about themselves. Why are they so certain? Here's what they say as of November 2021: "Decades of sound science has proven it can raise your 'bad' cholesterol and put you at higher risk of heart disease."

OK, there are decades of science backing them up. Still, it's pretty broad, talking about not eating "too much" saturated fat. Do they have something more specific to say? Here it is:

AHA

Hmm, how does that relate to the FDA's food label? On the cheese label above, the Saturated Fat was 4g, which is 20% of the recommended total. Arithmetic: if 4g is 20%, then 20g is the limit imposed by the FDA, which is almost 50% more than the professional organization of medical cardiologists recommends! I thought our government was looking out for our health -- the FDA should get with it!

Harvard

Hold on here, let's not jump to conclusions. Let's check in with that incredibly prestigious medical school, Harvard Medical School.

Here’s what they have to say in an article from November 2021:

Aa

Isn't it wonderful that they make it clear that it isn't just bacon and fatty cheese we need to be careful about? Reading a bit further,

Capture

Higher than the AHA, but lower than the FDA. I guess they don't all read the same scientific studies, or something. But at least they all agree that Saturated Fat is bad for you. Reading a bit farther in the article, they say plainly that eating too much Saturated Fat "can raise the amount of harmful LDL cholesterol in your blood. That, in turn, promotes the buildup of fatty plaque inside arteries — the process that underlies most heart disease."

Couldn’t be clearer.

Mayo Clinic

Just to be absolutely, double-plus positive, maybe it's worth checking one of the best hospital medical systems in the world, the Mayo Clinic. They're doctors, after all, not researchers or institutional employees. Let's see what they say. OMG! Look at what I found in the section on nutrition myths!

Eating fat will make you fat. The fat-free and low-fat diet trend is a thing of the past (80s and 90s, to be exact). Yet, some individuals are still scared of fat.

Isn't that what all this focus on fat avoidance is all about? Let's read on:

Be aware that fats aren’t created equal. Choose heart-healthy unsaturated fats, such as olive and canola oil, nuts, nut butters and avocados over those that are high in saturated and trans fats, including fatty meats and high-fat dairy products.

Now I get it. The FDA nutrition food label had a high limit for fats in general (which are OK), but a low limit for saturated fats, the bad kind. So the Mayo Clinic is on board too. All the experts agree!

Conclusion

There are crazy people out there who ignore the clear message of the government, the Experts and leading authorities in the field of health and nutrition. Some of these crazy people even write books, the obvious intent of which is to make more of the population lead crappier lives and die sooner. Here's a brief summary. Why the FDA, the agency supposedly charged with keeping us healthy, permits these health-destroying, misinformation-filled books to be published, I have no idea.

Regardless of the distractions: government and the big authorities in the field are united in the effort to keep us all more healthy by encouraging us all to strictly limit the amount of Saturated Fat we eat.

Posted by David B. Black on 02/01/2022 at 09:48 AM | Permalink | Comments (0)

Object-Oriented Software Languages: The Experts Speak

On the subject of Object-Oriented Programming (OOP), there are capital-E Experts, most of academia and the mainstream institutions, and there are small-e experts, which include people with amazing credentials and accomplishments. They give remarkably contrasting views on the subject of OOP. Follow the links for an overview, analysis and humor on the subject.

The Exalted Experts on OOP

Here is the start of the description of Brown's intro course to Computer Science, making it clear that "object-oriented design and programming" is the foundational programming method, and Java the best representation language:

Brown intro

Here's their description of OOP, making it clear that there are other ways to program, specifically the nearly-useless functional style, never used in serious production systems.

Brown OOP

See below to see what Dr. Alan Kay has to say about Java.

Here is what the major recruiting agency Robert Half has to say on the subject:

Object-oriented programming is such a fundamental part of software development that it’s hard to remember a time when people used any other approach. However, when object-oriented programming, or OOP, first appeared in the 1980s, it was a radical leap forward from the traditional top-down method.

These days, most major software development is performed using OOP. Thanks to the widespread use of languages like Java and C++, you can’t develop software for mobile unless you understand the object-oriented approach. The same goes for web development, given the popularity of OOP languages like Python, PHP and Ruby.

It's clear: OOP IS modern programming. Except maybe for some people who like functional languages.

The mere experts on OOP

We get some wonderful little-e expert testimony from here.

“Implementation inheritance causes the same intertwining and brittleness that have been observed when goto statements are overused. As a result, OO systems often suffer from complexity and lack of reuse.” – John Ousterhout Scripting, IEEE Computer, March 1998

“Sometimes, the elegant implementation is just a function. Not a method. Not a class. Not a framework. Just a function.” – John Carmack

OO is the “structured programming” snake oil of the 90s. Useful at times, but hardly the “end all” programming paradigm some like to make out of it.

And, at least in its most popular forms, it can be extremely harmful and dramatically increase complexity.

Inheritance is more trouble than it’s worth. Under the doubtful disguise of the holy “code reuse” an insane amount of gratuitous complexity is added to our environment, which makes necessary industrial quantities of syntactical sugar to make the ensuing mess minimally manageable.

More little-e expert commentary from here.

Alan Kay (1997)
The Computer Revolution hasn’t happened yet
“I invented the term object-oriented, and I can tell you I did not have C++ in mind.” and “Java and C++ make you think that the new ideas are like the old ones. Java is the most distressing thing to happen to computing since MS-DOS.” (proof)


Paul Graham (2003)
The Hundred-Year Language
“Object-oriented programming offers a sustainable way to write spaghetti code.”


Richard Mansfield (2005)
Has OOP Failed?
“With OOP-inflected programming languages, computer software becomes more verbose, less readable, less descriptive, and harder to modify and maintain.”


Eric Raymond (2005)
The Art of UNIX Programming
“The OO design concept initially proved valuable in the design of graphics systems, graphical user interfaces, and certain kinds of simulation. To the surprise and gradual disillusionment of many, it has proven difficult to demonstrate significant benefits of OO outside those areas.”


Jeff Atwood (2007)
Your Code: OOP or POO?
“OO seems to bring at least as many problems to the table as it solves.”


Linus Torvalds (2007)
this email
“C++ is a horrible language. … C++ leads to really, really bad design choices. … In other words, the only way to do good, efficient, and system-level and portable C++ ends up to limit yourself to all the things that are basically available in C. And limiting your project to C means that people don’t screw that up, and also means that you get a lot of programmers that do actually understand low-level issues and don’t screw things up with any idiotic “object model” crap.”


Oscar Nierstrasz (2010)
Ten Things I Hate About Object-Oriented Programming
“OOP is about taming complexity through modeling, but we have not mastered this yet, possibly because we have difficulty distinguishing real and accidental complexity.”


Rich Hickey (2010)
SE Radio, Episode 158
“I think that large object-oriented programs struggle with increasing complexity as you build this large object graph of mutable objects. You know, trying to understand and keep in your mind what will happen when you call a method and what will the side effects be.”


Eric Allman (2011)
Programming Isn’t Fun Any More
“I used to be enamored of object-oriented programming. I’m now finding myself leaning toward believing that it is a plot designed to destroy joy. The methodology looks clean and elegant at first, but when you actually get into real programs they rapidly turn into horrid messes.”


Joe Armstrong (2011)
Why OO Sucks
“Objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions and data structures belong in totally different worlds.”


Rob Pike (2012)
here
“Object-oriented programming, whose essence is nothing more than programming using data with associated behaviors, is a powerful idea. It truly is. But it’s not always the best idea. … Sometimes data is just data and functions are just functions.”


John Barker (2013)
All evidence points to OOP being bullshit
“What OOP introduces are abstractions that attempt to improve code sharing and security. In many ways, it is still essentially procedural code.”


Lawrence Krubner (2014)
Object Oriented Programming is an expensive disaster which must end
“We now know that OOP is an experiment that failed. It is time to move on. It is time that we, as a community, admit that this idea has failed us, and we must give up on it.”


Asaf Shelly (2015)
Flaws of Object Oriented Modeling
“Reading an object oriented code you can’t see the big picture and it is often impossible to review all the small functions that call the one function that you modified.”

Here is Wiki's take on issues with OOP. It goes into detail.

Here is Linus Torvalds' take on object-oriented C++. Linus is merely the creator and leader of the open-source software that fuels the vast majority of the web.

More details:

Essay by Joe Armstrong. "After its introduction OOP became very popular (I will explain why later) and criticising OOP was rather like “swearing in church”. OOness became something that every respectable language just had to have."

A talk given at an OOP conference by an OOP supporter who lists 10 things he hates.

A Stanford guy describing his evolution into OOP and then out of it. Lots of detail.

A professional who gradually realized there were issues with objects.

I have therefore been moving away from the object-oriented development principles that have made up the bulk of my 17 year career to date. More and more I am beginning to feel that objects have been a diversion away from building concise, well structured and reusable software.

As I pondered on this topic, I realised that this isn’t a sudden switch in my thinking. The benefits of objects have been gradually declining over a long period of time.

A detailed explanation of how the noun-centricity of OO languages perverts everything. Here is an extended quote from the start of this brilliant essay:

All Java people love "use cases", so let's begin with a use case: namely, taking out the garbage. As in, "Johnny, take out that garbage! It's overflowing!"

If you're a normal, everyday, garden-variety, English-speaking person, and you're asked to describe the act of taking out the garbage, you probably think about it roughly along these lines:

get the garbage bag from under the sink
carry it out to the garage
dump it in the garbage can
walk back inside
wash your hands
plop back down on the couch
resume playing your video game (or whatever you were doing)


Even if you don't think in English, you probably still thought of a similar set of actions, except in your favorite language. Regardless of the language you chose, or the exact steps you took, taking out the garbage is a series of actions that terminates in the garbage being outside, and you being back inside, because of the actions you took.

Our thoughts are filled with brave, fierce, passionate actions: we live, we breathe, we walk, we talk, we laugh, we cry, we hope, we fear, we eat, we drink, we stop, we go, we take out the garbage. Above all else, we are free to do and to act. If we were all just rocks sitting in the sun, life might still be OK, but we wouldn't be free. Our freedom comes precisely from our ability to do things.

Of course our thoughts are also filled with nouns. We eat nouns, and buy nouns from the store, and we sit on nouns, and sleep on them. Nouns can fall on your head, creating a big noun on your noun. Nouns are things, and where would we be without things? But they're just things, that's all: the means to an end, or the ends themselves, or precious possessions, or names for the objects we observe around us. There's a building. Here's a rock. Any child can point out the nouns. It's the changes happening to those nouns that make them interesting.

Change requires action. Action is what gives life its spice. Action even gives spices their spice! After all, they're not spicy until you eat them. Nouns may be everywhere, but life's constant change, and constant interest, is all in the verbs.

And of course in addition to verbs and nouns, we also have our adjectives, our prepositions, our pronouns, our articles, the inevitable conjunctions, the yummy expletives, and all the other lovely parts of speech that let us think and say interesting things. I think we can all agree that the parts of speech each play a role, and all of them are important. It would be a shame to lose any of them.

Wouldn't it be strange if we suddenly decided that we could no longer use verbs?

Let me tell you a story about a place that did exactly that...

The Kingdom of Nouns

In the Kingdom of Javaland, where King Java rules with a silicon fist, people aren't allowed to think the way you and I do. In Javaland, you see, nouns are very important, by order of the King himself. Nouns are the most important citizens in the Kingdom. They parade around looking distinguished in their showy finery, which is provided by the Adjectives, who are quite relieved at their lot in life. The Adjectives are nowhere near as high-class as the Nouns, but they consider themselves quite lucky that they weren't born Verbs.
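To make the contrast concrete, here's a minimal, hypothetical sketch of the garbage "use case" done both ways -- once in the Kingdom-of-Nouns style and once as, well, just a function. The class and function names are invented for illustration, and it's Python rather than Java, but the shape of the problem is the same.

# Kingdom-of-Nouns style: every action must be wrapped in a noun.
class GarbageBag:
    def __init__(self, location):
        self.location = location

class GarbageDisposalStrategy:
    def dispose(self, bag):
        bag.location = "garbage can in the garage"

class GarbageDisposalCoordinator:
    def __init__(self, strategy):
        self.strategy = strategy

    def coordinate(self, bag):
        self.strategy.dispose(bag)

bag = GarbageBag("under the sink")
GarbageDisposalCoordinator(GarbageDisposalStrategy()).coordinate(bag)

# Plain-verb style: sometimes the elegant implementation is just a function.
def take_out_garbage(bag):
    bag.location = "garbage can in the garage"

take_out_garbage(GarbageBag("under the sink"))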

Conclusion

No big surprise: the experts beat the Experts hands-down. But you'd never know it if you go through a typical Computer Science "education," absorb the claim that object-orientation is the "dominant" paradigm of computing and read the job requirements that talk about how serious the hiring group is about its object-hood. Programmers who are serious about what they do and try to understand it soon see the lack of clothing on King Object and move on.

Posted by David B. Black on 01/24/2022 at 02:28 PM | Permalink | Comments (0)

Data Humor Book by Rupa Mahanti

There's a new book out about Data Humor.

[Book cover: Data Humor by Rupa Mahanti]

If you like data, you will be amused by this book. If you feel tortured by data, join the crowd -- and read this book, it will relieve some of the stress. If you were wondering what nerd humor was all about, read this book -- better to learn about nerd humor by getting the giggles.

The author searched far and wide for data humor. She stumbled upon a blog that had some bits she thought were funny -- this blog, yes, the one you're reading now!

She contacted me to ask permission to quote me. After thinking hard about whether I should grant permission -- for about a microsecond -- I gave it. She asked me to check out a draft of the book. I guess she liked what I said because my quote went on the back cover and was the first of the quotes on Amazon.

"This is a brilliant book. The title says it's humorous. It's hilarious! But even more valuable is the sad-but-true insights it conveys about humans, lost and wandering in uncharted forests of data, anxious to escape."—David B. Black, Technology Partner, Oak HC/FT Partners

It's quite amazing how widely she searched for quotes, from places I never would have thought to look:

... book containing a collection of more than 400 funny and quirky quotes, puns, and punchlines related to data, big data, statistics, and data science, from different sources and a wide array of cultural figures, thought leaders and key influencers across the world -- William Edwards Deming, Charles Wheelan, Brené Brown, David B. Black, Tim O’Reilly, Jill Dyché, Evan Levy, Gwen Thomas, George Mount, David Shenk, James Gleick, Jim Barksdale, Vincent Granville, Cathy O’Neill, Dale Carnegie, Martyn Richard Jones, Timo Elliott, Mark Twain, Phil Simon, Lewis Carroll, Oscar Wilde, Thomas H. Davenport, DJ Patil, Damian Mingle, Thomas C. Redman, Cassie Korykorv, Brent Dykes, Guy Bradshaw, Scott Taylor, Susan Walsh, Winston Churchill, Ronald Reagan, Arthur Conan Doyle, and many more.

Here's just one:

Data isn't information, any more than fifty tons of cement is a skyscraper.
Clifford Stoll (Stoll 1996)

Don't you need some light in your life? I promise, it's lighter in every way than fifty tons of concrete...

Posted by David B. Black on 01/18/2022 at 09:49 AM | Permalink | Comments (0)

Software NEVER needs to be “Maintained”

We maintain our cars, homes and devices. Heating and cooling systems need regular maintenance. So do our bodies! If we don’t care for our bodies properly, they break down! Software, by sharp contrast, never needs to be maintained. NEVER! Using the word “maintenance” to describe applying a “maintenance update” to software is beyond misleading. More accurate would be to say “a new version of the software that was crippled by a horrible design error that our perpetually broken quality processes failed to catch.” That’s not “maintenance.” It’s an urgent “factory recall” to fix a design error that infects every car (copy of the software) that was built using the flawed design.

Software is different than almost everything

Software is unlike nearly everything in our experience. It is literally invisible. Even “experts” have trouble understanding a given body of code, much less the vast continent of code it interacts with. Naturally, we apply real-world metaphors to give us a chance of understanding it. While sometimes helpful, the metaphors often prove to be seriously misleading, giving nearly everyone a deeply inaccurate view of the underlying invisible reality. The notion of “software maintenance” is a classic example. The flaw is similar to the words “software factory.”

Maintaining anything physical centers around either preventing or repairing things that break due to simple wear-and-tear or an adverse event. We change the oil in a car because it degrades with use. We change the filters in heating and cooling units because they get clogged up with the gunk from the air that passes through them. We sharpen knives that have dulled as a result of use. We maintain our homes and yards. It’s the physical world and things happen.

In the invisible, non-physical world of software, by contrast, a body of software is the same after years of use as it was the moment it was created. Nothing gets worn down. Nothing gets clogged. An inspection after years of heavy use would show that every single bit, every one and zero, was the same as it was when it was created. Of course there are memory crashes, hacker changes, etc. It’s not that software is impervious to being changed; it’s just that software is unchanged as a result of being used – unlike everything in the normal physical world, which, one way or another, is changed by its environment – everything from clothes getting wrinkled or dirty from wear to seats being worn down by being sat upon.

The meaning of software maintenance

When a car is proven to have a design flaw, auto manufacturers are reluctant to ship everyone a new car in which the original design flaw has been corrected. Instead, they issue a recall notice to each affected owner, urging them to bring their car to the nearest dealership to have a repair done that corrects the design flaw. It’s inconvenient for the owner, but far less expensive for the manufacturer. With software, by contrast, all the software vendor has to do is make a corrected version of the software available for download and installation, the software equivalent of shipping everyone a new car! It’s no more expensive to “ship” hundreds of megabytes of “brand-new” code than it is a tiny bit. Such are the wonders of software.

Software factory recalls are part of everyday life. Software creators are maddeningly unable to create error-free software that is also cyber-secure. See this.

We’ve all become accustomed to the Three Stooges model of building software.

[Photo: The Three Stooges]

There are highly paid hordes of cosseted employees enjoying free lunches and lounging on bean bags on luxurious campuses, “hard at work” creating leading-edge software whose only consistent feature is that it’s late, expensive and chock full of bugs and security flaws.

While the Three Stooges and their loyal armies of followers are busily at work creating standards, regulations and academic departments devoted to churning out well-indoctrinated new members of the Stooge brigades, rebels are quietly at work creating software that meets the needs of under-served customers, using tools and methods that … gasp! … actually work. What an idea!

The good news is that the rebels are often richly rewarded for their apostasy by customers who eagerly use the results of their work. It’s a good thing for the customers that the totalitarian masters of the Three Stooges software status quo are no better at enforcing their standards than they are at building software that, you know, works.

Posted by David B. Black on 01/10/2022 at 11:30 AM | Permalink | Comments (0)

Cryptocurrency: Money, Trust and Regulation Book

A book has been published about cryptocurrency that stands out from the many books available on the market: it's written by a person with experience and true expertise in financial markets, institutions and regulation both in government and the private sector, Oonagh McDonald. Disclosure: I was her technical advisor for the book. We connected as a result of my article in Forbes on Central Bank Digital Currencies.

[Book cover: Cryptocurrency: Money, Trust and Regulation]

Dr. McDonald's prior books are impressive because of her amazing perspective and knowledge. Here's her background:

Dr. Oonagh McDonald CBE is an international expert in financial regulation, having advised regulatory authorities in a wide range of countries, including Indonesia, Sri Lanka and Ukraine. She was formerly a British Member of Parliament, then a board member of the Financial Services Authority, the Investors Compensation Scheme, the General Insurance Standards Council, the Board for Actuarial Standards and the Gibraltar Financial Services Commission. She was also a director of Scottish Provident and the international board of Skandia Insurance Company and the British Portfolio Trust. She is currently Senior Adviser to Crito Capital LLC. She was awarded a CBE in 1998 for services to financial regulation and business. Her books include Fannie Mae and Freddie Mac: Turning the American Dream into a Nightmare (2013), Lehman Brothers: A Crisis of Value (2015) and Holding Bankers to Account (2019). She now lives in Washington DC, having been granted permanent residence on the grounds of "exceptional ability".

Read the comments at the link about her books on Lehman Brothers, Fannie Mae, bankers and markets and others.

Here are examples of what others have said:

Oonagh McDonald has done it again. In this ambitious book, she helps the rest of the world catch up with her on the opportunities and risks associated with stable coins. Even if one may disagree with her about the future of stable coins (and I do a bit), this book is an invaluable resource, especially as a teaching tool, because of McDonald’s ability to synthesize and interpret a vast amount of information about complex and novel practices. -- Charles Calomiris, Henry Kaufman Professor of Financial Institutions, Columbia Business School

McDonald’s rigorously researched analysis of the development of cryptocurrencies is a must-read for anyone who has a stake in the future of money. It is an historical tour de force that painstakingly teases out of every corner of the cryptocurrency world the critical issues that governments, policy makers, and consumers must consider before abandoning government fiat money. -- Thomas P. Vartanian, executive director and professor of law, Program on Financial Regulation and Technology, George Mason University

Everyone fascinated by how the cryptocurrency phenomenon has created a whole sector of ventures to furnish ‘alternative currencies’, while the dollar price of a Bitcoin boomed from 8 cents to a high of more than $60,000, must wonder whether all this will really bring about a revolution in the nature of money. Will Bitcoin’s libertarian dream to displace central bank fiat currency be achieved? Or ironically, will central banks take over digital currencies and make themselves even more dominant monetary monopolies than before? Oonagh McDonald, always a voice of financial reason, provides a thorough consideration of these questions and of cryptocurrency ideas and reality in general, with the intertwined issues of technology, regulation, trust, and government monetary power. This is a very insightful and instructive guide for the intrigued. -- Alex J. Pollock, Distinguished Senior Fellow Emeritus, R Street Institute, and former Principal Deputy Director, Office of Financial Research, US Treasury

It's a different perspective from the many books on the subject of cryptocurrencies that have been published. Whether or not you agree with her conclusions, you will read facts and perspective here that are not available elsewhere.

Posted by David B. Black on 01/04/2022 at 04:18 PM | Permalink | Comments (0)

The Nightmare of Covid Test Scheduling

Oh, you want to get a Covid test, do you? Little did you know that the clever people who do these things also give you endurance, patience and intelligence tests at the same time! Our wonderful healthcare people and helpful governments have somehow arranged an impressive variety of ways to make you fill out varying forms in varying orders, only to find out that there are no available appointments.

Don’t you think the highly paid experts who created these services could have done something simple, like following the model of dimensional search used at little places like Amazon, travel sites and other places that care about customers? I guess that would have been too easy or something. And besides, medical scheduling in general is a nightmare, why should this be different?
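For what it's worth, availability-first, dimensional filtering of the Amazon sort isn't hard to sketch. The site names, distances and open slots below are made up; the point is that the patient sees only locations that actually have appointments before filling out a single form.

from datetime import date

# Hypothetical availability data -- in a real system this would come from each site's scheduler.
sites = [
    {"name": "Pharmacy A", "miles": 1.2, "open_slots": []},
    {"name": "Clinic B", "miles": 3.5, "open_slots": [date(2021, 12, 23)]},
    {"name": "Pharmacy C", "miles": 6.0, "open_slots": [date(2021, 12, 21), date(2021, 12, 22)]},
]

def sites_with_availability(sites, max_miles):
    # Filter on the dimensions the patient cares about BEFORE any questionnaires.
    usable = [s for s in sites if s["miles"] <= max_miles and s["open_slots"]]
    # Soonest appointment first, then distance.
    return sorted(usable, key=lambda s: (min(s["open_slots"]), s["miles"]))

for s in sites_with_availability(sites, max_miles=10):
    print(s["name"], "- earliest slot:", min(s["open_slots"]))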

Looking for a test: CVS

Search told me that my local CVS has testing. I clicked through to the website of my local store. I clicked “schedule a test.” Although I had come from the local store's page, I guess the people who built covid testing didn’t manage to have the local site pass along its location, so I entered my location again as requested.

Now I have to “answer a few questions” for no-cost testing. Eight questions. Then when I say yes to recent symptoms, 12 more questions plus the date my symptoms began. Then clicking that I was truthful.

Next, pick a test type, look at a map of local stores and see a list of dates starting today. I pick today. There’s a list of each store, with a button under each to “Check for available times.” Click on the first store. Here’s what appears:

There are no available times at this location for Tue., 12/21. Try searching for availability on another date.

Wow. I go up and pick the next day. Click. No times. Pick the next day. Click. No times.

CVS has pioneered a whole new way to help customers pick a time! You pick a date, pick a store, click and hope you get lucky. Then pick a different store and/or a different time and click again. And keep rolling until you hit the jackpot! Assuming there’s one there…

Since there was no end in sight, I tried something different.

Looking for a test: Walgreens

No questions first. Hooray! Just put in a location, pick a test type and see a list of locations. … Almost all of which had “No appointments available.” Let’s check out the one nearest to me, which said “Few appointments available.” I click. First I have to agree to lots of things. Now I have to enter my full patient information: name, gender, DOB, race, ethnicity, full address, phone and email. Then review and click that it’s correct.

Then, it’s the covid questions: my symptoms, contacts, medical conditions, pregnancy. Have I had vaccines? For each, which one and the date given. Have I tested positive in the past?

Now, after all that, I can pick an appointment. Back to that bait-and-switch first screen with test types and locations. I pick the location. Now a calendar shows up. Today’s date is highlighted. This message in red is below: “No time slots available within the selected date. Try a different date for more options.” The next 7 days are in normal type, beyond that they’re greyed out. Do any of them work? I try each day individually. They each give the same message! Why couldn’t you have told me that NO DATES WERE AVAILABLE!?!? Maybe even … BEFORE I filled all that stuff out??

Looking for a test: The state of NJ

Since I live in NJ, I get regular dispatches about how the state government cares about my health in general and covid in particular. So I went to the state site.

NJ covid test

Which it turns out is operated by a private company, Castlight.

Castlight

I put in my zip code. They list places that offer testing, one of which is the Walgreens I just tried. But I click on it anyway, and they link me to Walgreens testing … in a town 10 miles away instead of my town, which was explicitly the one I clicked on. Good job!

They got my hopes up by listing Quest Diagnostics, which has a location in my town. I answer a long list of questions and am told that I qualify for a test! Hooray! But then …

Myquest

I have to sign up and provide loads of personal information before even knowing I can get a test. That’s it for Quest.

Looking for a test: The local county

Maybe my local county would have done it better? Let’s check it out.

I get a long list of testing places. How do I find one near me? After a few minutes of confusion, I discover that the sites are listed alphabetically! Now that’s helpful!

CVS of course is near the top, with a line per location. My town isn’t listed even though I already know that the local CVS claims they do tests. Crap.

Looking for a test: Digging deep

I found a private place, Solv, that claims to link you right to testing places. I tried. They had a clinic not too far from me. Clicked. I'm still on Solv, which is potentially good. After more clicking, it turns out that no appointments were available today or tomorrow, the only choices. Gee, Solv, maybe in the next release of your software you could possibly only show choices that were actually, you know, available??

I finally tried a little pharmacy that is local and has remained independent. They offer tests. I clicked and got to a page dedicated to the pharmacy under a place I’d never heard of, Resil Health. Right away they list dates and times available. Just a few days out.

Gerards

I pick a date and enter the usual information on a clean form, but also my insurance information and a photo of the front & back of my card. Click. The time is no longer available! But at least picking another time was easy. I was disappointed that it was a couple of days out. They sent an email with a calendar invite. I accepted. There was a link to reschedule. I tried it. To make a long story short, sometimes when I clicked reschedule the dates available changed, and earlier ones appeared. After some effort I snagged one the same day! Then I went. All I had to do was show my driver’s license – since they had everything else, neither I nor anyone at the pharmacy had to do paperwork – Resil Health did it all, including the reporting.

It was a pain, but by far the best. Hooray for small-group entrepreneurs, getting a service up and running that makes things easier and better than any of the giant private companies and certainly any of the pathetic ever-so-helpful governments.

Looking for a test: Is it just me?

I had to wonder: is New Jersey particularly bad, as snotty New Yorkers like to joke about, or is it just the way things are? It turns out that, even in high-rise Manhattan, covid testing is tough. This article spells out the issues.

Mayor Bill de Blasio keeps telling New Yorkers frustrated with long waits and delayed results at privately-run COVID testing sites to use the city’s public options — but his administration’s incomplete and bulky websites make that exceedingly difficult.

It’s not just me.

Conclusion

I got my test. I’ll get the results soon. Let's hope getting those results goes better than it often does in medicine. What’s the big deal? I’m only writing about it because it’s a representative story of life in the slow lane of typical software development. It’s possible to write good software. Thankfully there are small groups of motivated programmers who ignore the mountain of Expert-sanctioned regulations, standards and processes that are supposed to produce good software. These software ninjas have a different set of methods – ones that actually work! For example, in New York City:

The complaints echo the problems New Yorkers encountered when city officials first rolled out their vaccine appointment registration systems this spring — prompting one big-hearted New Yorker with computer skills to create TurboVax to workaround the mess.

“We don’t have a single source of truth for all testing sites in NYC,” tweeted the programmer, Huge Ma, who was endearingly dubbed ‘Vax Daddy’ by grateful Gothamites. “Tech can’t solve all problems but it shouldn’t itself be a problem on its own.”

One guy – but a guy who actually knows how to produce effective, working software in less time than the usual software bureaucracy would take to produce a first draft requirements document. This is one of the on-going stream of anomalies that demonstrate that a paradigm shift in software is long overdue.

Posted by David B. Black on 12/22/2021 at 03:01 PM | Permalink | Comments (0)

The Dimension of Automation Depth in Information Access

I have described the concept of automation depth, which goes through natural stages starting with the computer playing a completely supportive role to the person (the recorder stage) and ending with the robot stage in which the person plays a secondary role. I have illustrated these stages with a couple of examples that show the surprising pain and trouble of going from one stage to the next.

Unlike the progression of software applications from custom through parameterized to workbench, customers tend to resist moving to the next stage of automation for various reasons including the fear of loss of control and power.

Automation depth in Information Access

Each of the patterns of software evolution I've described is general in nature. I’ve tried to give examples to show how the principles are applied. In this section, I’ll show how the entire pattern played out in “information access,” which is the set of facilities that enable people to find and use computer-based information for business decision making.

Built-in Reporting

“Recorder” is the first stage of the automation depth pattern of software evolution. In the case of information access, early programs were written to record the basic transactions that took place; as part of the recording operation, reports were typically produced, summarizing the operations just performed. For example, all the checks written and deposits made at a bank would be recorded during the day; then, at night, all the daily activity would be posted to the accounts. The posting program would perform all the updates and create reports. The reports would include the changes made, the new status of all the accounts, and whatever else was needed to run the bank.

At this initial stage, the program that does the recording also does the reporting. Reporting is usually thought to be an integral part of the recording process – you do it, and then report on what you did. Why would you have one program doing things, and a whole separate program figuring out and reporting on what the first program did? It makes no sense.

What if you need reports for different purposes? You enhance the core program and the associated reports. What if lots of people want the reports? You build (in the early days) or acquire (as the market matured) a report distribution system, to file the reports and provide them to authorized people as required.

Efficiency was a key consideration. The core transaction processing was “touching” the transactions and the master files; while it was doing this, it could be updating counters and adding to reports as it went along, so that you wouldn’t have to re-process the same data multiple times.
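Here's a minimal sketch of what that looked like in practice (the accounts and transactions are invented for illustration): the same loop that posts the day's transactions to the master records also accumulates the report, which is exactly why reporting felt like an integral part of recording.

# Stage 1: posting program with built-in reporting -- one pass over the day's work.
accounts = {"1001": 500.00, "1002": 1250.00}
daily_transactions = [("1001", -50.00), ("1002", 200.00), ("1001", 25.00)]

report_lines = []
total_posted = 0.0
for acct, amount in daily_transactions:
    accounts[acct] += amount      # update the master record...
    total_posted += amount        # ...and build the report in the same pass
    report_lines.append(f"{acct}  {amount:10.2f}  new balance {accounts[acct]:10.2f}")

print("DAILY POSTING REPORT")
print("\n".join(report_lines))
print(f"TOTAL POSTED: {total_posted:10.2f}")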

Report Writers

The “power tool” stage of automation depth had two major sub-stages. The first of these was the separation of reporting from transaction processing. Information access was now a key goal in itself, and was so important and done so frequently that specialized tools were built to make it easy, which is always the sign that you’re into the “power tool” phase.

This first generation of power tools consisted of specialized software packages generally called “report writers.” The power tool was directed at the programmer who had to create the report. Originally, the language that was used for transaction processing was also used for generating the report. The most frequent such language was COBOL. COBOL was cumbersome for this purpose, which is why specialized syntax was added to it to ease the task of writing reports. But various clever people saw that by creating a whole new language and software environment, the process of writing reports could be tremendously enhanced and simplified. These people began to think in terms of reporting itself, so naturally they broke the problem into natural pieces: accessing the data you want to report on, processing it (select, sort, sum, etc.), and formatting it for output.

The result of this thinking was a whole industry that itself evolved over time, playing out in multiple environments and taking multiple forms. The common denominator was that they were all software tools to enable programmers to produce reports more quickly and effectively than before, and were completely separate from the recorder or transaction-processing function.
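In the spirit of those report writers, here's a minimal sketch (the records and field names are hypothetical) that keeps the three concerns they separated out -- accessing the data, processing it (select, sort, sum), and formatting the output -- completely apart from any transaction-processing code.

records = [
    {"branch": "North", "product": "Checking", "amount": 120.0},
    {"branch": "North", "product": "Savings", "amount": 310.0},
    {"branch": "South", "product": "Checking", "amount": 95.0},
]

def run_report(records, select, group_by, total_field):
    # Select, sort and sum -- with no knowledge of how the records were produced.
    selected = [r for r in records if select(r)]
    totals = {}
    for r in sorted(selected, key=lambda r: r[group_by]):
        totals[r[group_by]] = totals.get(r[group_by], 0.0) + r[total_field]
    return totals

# Formatting is a separate step from processing.
totals = run_report(records, select=lambda r: r["amount"] > 100.0,
                    group_by="branch", total_field="amount")
for branch, total in totals.items():
    print(f"{branch:<10}{total:>10.2f}")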

At the same time, data storage was evolving. The database management system emerged through several generations. This is not the place for that story, which is tangential to the automation depth of information access. What is relevant is that, as the industry generally recognized that information access had moved to the report writer stage of automation, effort was made to create a clean interface between data and the programs that accessed the data for various purposes.

Data Warehouse and OLAP

Report writers were (and are) important power tools – but they’re basically directed at programmers. But programmers are not the ultimate audience for most reports; most reports are for people charged with comprehending the business implications of what is on the report and taking appropriate action in response. And the business users proved to be perennially dissatisfied with the reports they were getting. There was too much information (making it hard to find the important things), not enough information, information organized in confusing ways (so that users would need to walk through multiple reports side-by-side), or information presented in boring ways that made it difficult to grasp the significance of what was on the page. And anytime you wanted something different, it was a big magilla – you’d have to get resources authorized, a programmer assigned, suffer through the work eventually getting done, and by then you’d have twice as many new things that needed getting done.

As a result of these problems, a second wave of power tools emerged, directed at this business user. These eventually were called OLAP tools. The business user (with varying levels of help from those annoying programmers) had his own power tool, giving him direct access to the information. Instead of static reports, you could click on something and find out more about it – right away! But with business users clicking, the underlying data management systems were getting killed, so before long the business users got their own copy of the data, a data warehouse system.

In a sign of things to come, the business users noticed that sometimes, they were just scanning the reports for items of significance, and that it wasn’t hard to spell out exactly what they cared about. So OLAP tools were enhanced to find and highlight items of special significance, for example sales regions where the latest sales trends were lower than projections by a certain margin. This evolved into a whole system of alerts.
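The alerting idea is easy to sketch (the regions, numbers and 5% margin below are made up): compare actuals to projections and surface only the exceptions, instead of making a person scan the whole report.

# "Robot-ish" alerting bolted onto an OLAP-style summary: flag regions trailing projections.
sales = {"Northeast": 92_000, "Midwest": 118_000, "West": 74_000}
projections = {"Northeast": 100_000, "Midwest": 115_000, "West": 95_000}
ALERT_MARGIN = 0.05  # alert if actuals trail projections by more than 5%

for region, actual in sales.items():
    shortfall = (projections[region] - actual) / projections[region]
    if shortfall > ALERT_MARGIN:
        print(f"ALERT: {region} sales are {shortfall:.0%} below projection")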

Predictive Analytics

OLAP tools are certainly power tools, but the trouble with power tools is that you need power users – people who know the business, can learn to use a versatile tool like OLAP effectively, and can generate actions from the information that help the business. So information access advanced to the final stage in our general pattern, the “robot” stage, in which human decision making is replaced by an automated system. In information access, that stage is often called “predictive analytics,” which is a kind of math modeling.

As areas of business management are better understood, it usually turns out that predictive analytics can do a better, quicker job of analyzing the data, finding the patterns, and generating the actionable decisions than a person ever could. A good example is home mortgage lending, where the vast majority of the decisions today are made using predictive analytics. Many years ago, a person who wanted a home mortgage would make an appointment with a loan officer at a local savings bank and request the loan. The officer would look at your information and make a human judgment about your loan worthiness.

That “power user” system has long since been supplanted by the “robot” system of predictive analytics, where all the known data about any potential borrower is constantly tracked, and credit decisions about that person are made on the basis of the math whenever needed. No human judgment is involved, and in fact would only make the system worse.

Predictive analytics is the same in terms of information utilization as the prior stages, but the emphasis on presenting a powerful, flexible user interface to enable a power user to drive his way to information discovery is replaced by math models that are constantly tuned and updated by the new information that becomes available.

Sometimes the predictive analytics stage is held back because of a lack of vision or initiative on the part of the relevant industry leaders. However, a pre-condition for this approach really working is the availability of all the relevant data in suitable format. For example, while we tend to focus on the math for the automated mortgage loan processing, the math only works because it has access to a nationwide database containing everyone’s financial transactions over a period of many years. A power user with lots of experience, data and human judgment will beat any form of math with inadequate data; however, good math fueled with a comprehensive, relevant data set will beat the best human any time.
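As a toy illustration of the "robot" stage -- with features, weights and the approval threshold entirely made up, since real mortgage models are trained on years of nationwide credit data -- a scoring function stands in for the loan officer's judgment:

import math

# Hypothetical logistic scoring model standing in for real predictive analytics.
WEIGHTS = {"credit_score": 0.012, "debt_to_income": -4.0, "years_employed": 0.15}
BIAS = -7.0
APPROVE_THRESHOLD = 0.8

def approval_score(applicant):
    # Logistic model: weighted sum of the applicant's features squashed into a 0-1 score.
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

applicant = {"credit_score": 720, "debt_to_income": 0.28, "years_employed": 6}
score = approval_score(applicant)
print(f"score = {score:.2f} ->", "approve" if score >= APPROVE_THRESHOLD else "refer to underwriting")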

Conclusion

All these stages of automation co-exist today. One of the key rules of computing is that old programs rarely die; they just get layered on top of, given new names, and gradually fade into obscurity. There are still posting programs written in assembler language that have built-in reporting. In spite of years of market hype from the OLAP folks, report writing hasn’t gone away; in fact, some older report writers have interesting new interactive capabilities; OLAP and data warehouses are things that some organizations aspire to, while others couldn’t live without them; finally, there are important and growing pockets of business where the decisions are made by predictive analytics, and to produce pretty reports for decision-making purposes (as opposed to bragging about how well the predictive analytics are doing) would be malpractice.

Even though all these stages of automation co-exist in society as a whole, they rarely co-exist in a functional segment of business. Each stage of automation is much more powerful than the prior stage, and it provides tangible, overwhelming advantages to the groups that use it. Therefore, once a business function has advanced to use a new stage of information access automation, there is a “tipping point,” and it tends to become the new standard for doing things among organizations performing that function.

Posted by David B. Black on 12/13/2021 at 02:19 PM | Permalink | Comments (0)

Trusting Science: the Whole Milk Disaster

I trust science. The gradual emergence of science has led to a revolution in human existence that has happened so quickly and with such impact that it is hard to gain perspective on it.

Trusting science is not the same as trusting the pronouncements of people who are designated scientific experts. Establishing the truth of a scientific theory is an entirely different process than the social dynamics of rising to a position of leadership in a group of any kind. Official experts, whether government, corporate or academic, nearly always defend the current version of received truth against challenge of all kinds; most of those challenges are stupidity and ignorance. My go-to expert on this subject, as so many others, is Dilbert:

Dilbert expert

Sadly, those same establishment experts tend to be the strongest opponents of genuine innovation and scientific advances of all kinds. As I explain here, with examples from Feynman and the history of flight, one of the core elements of successful innovation is ignoring the official experts.

My skepticism is well proven in the case of so-called Computer Science, which doesn't even rise to the level of "useful computer practices" much less science. As I have shown extensively, Computer Science and Engineering is largely a collection of elaborate faith-based assertions without empirical foundation. And computers are all numbers and math! If it's so pathetic in such an objective field, imagine how bad it can get when complex biological systems are involved.

This brings us to the subject of saturated fat (the solid fat of the kind that's in meat), whole milk and human nutrition. This ongoing scandal, for which no one has been imprisoned, sued or even demoted in spite of its leading to widespread obesity and other health-damaging conditions, is still rolling along. The hard science concerning the supposed connection between saturated fat, cholesterol and heart disease is in. The results are clear. It is positively healthy for people to eat saturated fat. Period. The scandal is that the "expert" people and organizations that have declared saturated fat and cholesterol to be dangerously unhealthy for many decades refuse to admit their errors and continue to waffle on the subject.

This is relevant to computer science because of the stark differences between the two fields. Software is esoteric and invisible to nearly everyone, while the results of eating are tangible to everyone, and the statistics about the effects are visible and measurable. The common factor is ... people. In both cases there is a wide consensus of expert opinion about the right way to build and maintain software, and the right way to eat and live in order to be healthy. Experts! From blood-letting to flying machines, they lead the way!

Usually the Science-challengers are wrong

It has taken me a great deal of time to dig into this scandal, in part because there are so many cases of "the experts are all wrong -- me and my fringe group have the truth." I wanted to make absolutely sure "it's good to eat saturated fat" wasn't another of these. After all, the simple notion that eating fat makes you fat makes common sense!

An example of a harmfully bogus claim is the anti-vax movement, which has been supported by a number of famous people. The idea is that vaccinations in general and childhood vaccinations in particular have horrible consequences -- for example, causing autism in children. A study led by Dr. Andrew Wakefield was published in the British journal Lancet that claimed to prove the association. After years of growing fear and resistance to childhood MMR vaccines, the study was shown to be fatally flawed and corrupt, funded by trial attorneys who wanted to sue drug makers. Later claims about mercury-containing thimerosal in some vaccines continued to fuel the anti-vax cause. Also wrong. Here's a brief history.

The Scandal

Just as vaccinations are provably good things, surely the diet recommendations of the major medical and government institutions in favor of limiting fat must also be! Sadly, this is not the case. Rather, it's a wonderful example of how hard paradigm shifts are to accomplish, particularly when the prestige of major institutions is involved. And, sadly, of how prestige and baseless assertions have substituted for science, shockingly similar to bloodletting and other universally-accepted-on-no-objective-basis practices.

A basic, understandable account of the subject may be found in The Big Fat Surprise, which is loaded with appropriate detail. Here is a summary:

"the past sixty years of low-fat nutrition advice has amounted to a vast uncontrolled experiment on the entire population, with disastrous consequences for our health.

For decades, we have been told that the best possible diet involves cutting back on fat, especially saturated fat, and that if we are not getting healthier or thinner it must be because we are not trying hard enough. But what if the low-fat diet is itself the problem? What if those exact foods we’ve been denying ourselves — the creamy cheeses, the sizzling steaks — are themselves the key to reversing the epidemics of obesity, diabetes, and heart disease?"

Yes, this sounds like what an anti-science crank would say. All I can say is, dig in. You'll find the shoddy beginnings of the fat-cholesterol-heart hypothesis; the biased studies that seemed to support it; the massive, multi-decade Framingham study which was trumpeted as supporting the anti-fat theory, but whose thoroughly confirmed and vetted results were actively suppressed for many years; the uncontested studies that disprove the anti-fat recommendations; and the improved understanding of the biological systems that thoroughly debunks the widely promoted campaign against saturated fat and LDL, the "bad" cholesterol.

More detail

If you want a start on more detail, I recommend Dr. Sebastian Rushworth at a high level, and the recent book by long-time cardiac doctor Malcolm Kendrick, which gives the details of the studies and biology that explain what really happens.

Here are a couple of explanations from Dr. Rushworth:

"the LDL hypothesis basically says that heart disease happens because LDL somewhow ends up in the arterial wall, after which it is oxidized, which starts an inflammatory reaction that gradually leads to the hardening of arteries and eventually to bad things like heart attacks and strokes."

"... the LDL hypothesis is bunk. There is by now a wealth of evidence showing that LDL has little to do with heart disease, such as this systematic review from BMJ Evidence Based Medicine, which showed that there is no correlation whatsoever between the amount of LDL lowering induced by statins and other LDL lowering drugs, and the benefit seen on cardiovascular disease risk (if indeed any benefit is seen – it often isn’t)."

Rushworth's summary of the Kendrick book is:

"The ultra-short elevator pitch version of what he argues in the book is that heart disease is what happens when damage to the arterial wall occurs at a faster rate than repair can happen. That’s why everything from sickle cell disease to diabetes to high blood pressure to smoking to rheumatoid arthritis to cortisone treatment to the cancer drug Avastin increases the risk of cardiovascular disease – they all either increase the speed at which the arterial wall gets damaged or slow down its repair. It’s why heart disease (more correctly called “cardiovascular disease”) only affects arteries (which are high pressure systems) and not veins (which are low pressure systems), and why atherosclerosis (the hardening of the arteries that characterizes heart disease) primarily happens at locations where blood flow is extra turbulent, such as at bifurcations.

This alternative to the LDL hypothesis is known as the “thrombogenic hypothesis” of heart disease. It’s actually been around for a long time, first having been proposed by German pathologist Carl von Rokitansky in the 19th century. Von Rokitansky noted that atherosclerotic plaques bear a remarkable similarity to blood clots when analyzed in a microscope, and proposed that they were in fact blood clots in various stages of repair.

Unfortunately, at the time, von Rokitansky wasn’t able to explain how blood clots ended up inside the artery wall, and so the hypothesis floundered for a century and a half (which is a little bit ironic when you consider that no-one knows how LDL ends up inside the artery wall either, yet that hasn’t hindered the LDL hypothesis from becoming the dominant explanation for how heart disease happens). We now know the mechanism by which this happens: cells formed in the bone marrow, known as “endothelial progenitor cells”, circulate in the blood stream and form a new layer of endothelium on top of any clots that form on the artery wall after damage – thus the clot is incorporated in to the arterial wall.

In spite of the fact that probably at least 99% of cardiologists still believe in the LDL hypothesis, the thrombogenic hypothesis is actually supported far better by all the available evidence. While the LDL hypothesis cannot explain why any of the risk factors listed above increases the risk of heart disease, the thrombogenic hypothesis easily explains all of them."

Conclusion

Many major institutions have dialed down their fervent promotion of the low-fat and LDL-is-bad myths, but they haven't done what they should do, which is to reverse their positions and issue a mea culpa. They should at minimum accept at least partial responsibility for the explosion of obesity, the useless pharma mega-dollars wasted, and the attendant health disasters for countless humans. The fact that they haven't helps us understand the resistance to correction of the similarly powerful mainstream myths about software. It's not about the LDL or the software; it's about people, pride, institutions, bureaucracy and entrenched practices and beliefs that fight change.

 

Posted by David B. Black on 12/06/2021 at 02:24 PM

Computer Science and Kuhn's Structure of Scientific Revolutions

If bridges fell down at anywhere close to the rate that software systems break and become unavailable, there would be mass revolt. Drivers would demand that bridge engineers make radical changes and improvements in bridge design and building. If criminals seized bridges and held vehicles hostage for ransom anywhere close to as often as criminals steal organizations' data or lock up their systems until a ransom is paid, there would be mass revolt. In the world of software, this indefensible state of affairs is what passes for normal! Isn't it time for change? Has something like this ever happened in other fields that we can learn from?

Yes. It has happened often enough that it has been studied, along with the process of resistance to change that persists until the overwhelming force of a new paradigm breaks through.

Thomas Kuhn was the author of a highly influential book published in 1962 called The Structure of Scientific Revolutions. He introduced the term “paradigm shift,” which is now a general idiom. Examining the history of science, he found that there were abrupt breaks. There would be a universally accepted approach to a scientific field that was challenged and then replaced with a revolutionary new approach. He made it clear that a paradigm shift wasn’t an important new discovery or addition – it was a whole conceptual framework that first challenged and then replaced the incumbent. An example is Ptolemaic astronomy in which the planets and stars revolved around the earth, replaced after long resistance by the Copernican revolution.

Computer Science is an established framework that reigns supreme in academia, government and corporations, including Big Tech. There are clear signs that it is as ready for a revolution as the Ptolemaic earth-centric paradigm was. Many aspects of the new paradigm have been established and proven in practice. Following the pattern of all scientific revolutions, there is massive establishment resistance, led by a combination of ignoring the issues and denying the problems.

The Structure of Scientific Revolutions

Thomas Kuhn received degrees in physics, up to a PhD from Harvard in 1949. He was into serious stuff, with a thesis called “The Cohesive Energy of Monovalent Metals as a Function of Their Atomic Quantum Defects.” Then he began exploring. As Wiki summarizes:

As he states in the first few pages of the preface to the second edition of The Structure of Scientific Revolutions, his three years of total academic freedom as a Harvard Junior Fellow were crucial in allowing him to switch from physics to the history and philosophy of science. He later taught a course in the history of science at Harvard from 1948 until 1956, at the suggestion of university president James Conant.

[Cover image: The Structure of Scientific Revolutions, first edition]
The path by which he came to his realization is fascinating. I recommend the book to anyone interested in how science works and the history of science.

After studying the history of science, he realized that it isn't just incremental progress.

Kuhn challenged the then prevailing view of progress in science in which scientific progress was viewed as "development-by-accumulation" of accepted facts and theories. Kuhn argued for an episodic model in which periods of conceptual continuity where there is cumulative progress, which Kuhn referred to as periods of "normal science", were interrupted by periods of revolutionary science. The discovery of "anomalies" during revolutions in science leads to new paradigms. New paradigms then ask new questions of old data, move beyond the mere "puzzle-solving" of the previous paradigm, change the rules of the game and the "map" directing new research.

Real-life examples of this are fascinating. The example often given is the shift from "everything revolves around the earth" to "the planets revolve around the sun." What's interesting here is that the planetary predictions of the Ptolemaic method were quite accurate. The shift to Copernicus (Sun-centric) didn't increase accuracy, and the calculations grew even more complicated. The world was not convinced! Kepler made a huge step forward with elliptical orbits instead of circles with epicycles, and got better results that made more sense. The scientific community was coming around. Then, when Newton showed that Kepler's laws of planetary motion could be derived from his own laws of motion and gravity, the revolution won.

While the book doesn't emphasize this, it's worth pointing out that the Newtonian scientific paradigm "won" among a select group of numbers-oriented people. The public at large? No change.

Anomalies that drive change

Among the interesting things Kuhn describes are the factors that drive a paradigm shift in science: anomalies, results that don't fit the existing theory. In most cases, anomalies are resolved within the paradigm and drive incremental change. When anomalies resist resolution, something else happens.

During the period of normal science, the failure of a result to conform to the paradigm is seen not as refuting the paradigm, but as the mistake of the researcher, contra Popper's falsifiability criterion. As anomalous results build up, science reaches a crisis, at which point a new paradigm, which subsumes the old results along with the anomalous results into one framework, is accepted. This is termed revolutionary science.

The strength of the existing paradigm is shown by the strong tendency to blame things on mistakes of the researcher -- or in the case of software, on failure to follow the proper procedures or to write the code well.

The Ruling Paradigm of Software and Computer Science

There is a reigning paradigm in software and Computer Science. As you would expect, the paradigm is almost never explicitly discussed. It has undergone some evolution over the last 50 years or so, but the change has been less radical than some would have it.

At the beginning, computers were amazing new devices and people programmed them as best they could. Starting over 50 years ago, people began to notice that software took a long time and lots of effort to build and was frequently riddled with bugs. That's when the foundational aspects of the current paradigm were born and started to grow, continuing to this day:

  1. Languages should be designed and used to help programmers avoid making mistakes. Programs should be written in small pieces (objects, components, services, layers) that can be individually made bug-free.
  2. Best-in-class detailed procedures should be adapted from other fields to assure that the process from requirements through design, programming, quality assurance and release is standardized and delivers predictable results.

The ruling paradigm of software and computer science is embodied in textbooks, extensive highly detailed regulations, courses, certifications and an ever-evolving collection of organizational structures. Nearly everyone in the field unconsciously accepts it as reality.
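
To make the paradigm concrete, here is a minimal sketch, in Python, of the kind of artifact it prescribes: a small, typed, single-purpose piece of code with its own unit tests, so it can be declared "bug-free" in isolation. The function, its numbers and its tests are my own illustration, not taken from any textbook or from this post.

  # The paradigm in miniature: a small, typed, individually testable component.
  import unittest


  def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
      """Payment for a fixed-rate, fully amortized loan."""
      if months <= 0 or principal < 0:
          raise ValueError("principal must be >= 0 and months > 0")
      if annual_rate == 0:
          return principal / months
      r = annual_rate / 12.0  # periodic (monthly) rate
      return principal * r / (1.0 - (1.0 + r) ** -months)


  class MonthlyPaymentTest(unittest.TestCase):
      def test_zero_rate_divides_evenly(self):
          self.assertAlmostEqual(monthly_payment(1200.0, 0.0, 12), 100.0)

      def test_known_value(self):
          # $100,000 at 6% for 30 years is roughly $599.55 per month.
          self.assertAlmostEqual(monthly_payment(100000.0, 0.06, 360), 599.55, places=2)

      def test_rejects_bad_input(self):
          with self.assertRaises(ValueError):
              monthly_payment(1000.0, 0.05, 0)


  if __name__ == "__main__":
      unittest.main()

Each such piece can pass its tests in isolation; whether piling them up by the book yields reliable systems is exactly what the anomalies below call into question.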

Are There Anomalies that Threaten the Reigning Paradigm?

Yes. There are two kinds.

The first kind consists of the failures of delivery and quality that continue to plague by-the-book software development, in spite of decades of piling up rules, regulations, methods and languages that are supposed to make software development reliable and predictable. The failures are mostly attributed to errors and omissions by the people doing the work -- if they had truly done things the right way, the problems would not have happened. At the same time, there is a regular flow of incremental "advances" in procedure and technology designed to prevent such problems. This is textbook Kuhn -- the defenders of the status quo attributing issues to human error.

The second kind of anomaly is the bodies of new software created by small teams of people who ignore the universally taught and prescribed methods and get things done that teams hundreds of times larger couldn't do. Things like this shouldn't be possible. Teams that ignore the rules should fail -- but instead most of the winning teams are ones that did things the "wrong" way. This is shown by the frequency with which new software products are created by such rule-ignoring small groups, rocket to success, and are then bought by the rule-following organizations, including Big Tech, who couldn't build them themselves -- in spite of their giant budgets and paradigm-conforming methods. See this and this.

When will this never-ending stream of paradigm-breaking anomalies make a paradigm-shifting revolution take place in Computer Science? There is no way of knowing. I don't see it taking place any time soon.

Conclusion

The good news about the resistance of the current consensus in Computer Science and software practice to a paradigm shift is that it provides room for creative entrepreneurs to build new things that meet the unmet needs of the market. The entrepreneurs don't even have to go all-in on the new software paradigm! They just need to ignore enough of the bad old stuff and use enough of the good new stuff to get things done that the rule-followers are incapable of. Sadly, the good news doesn't apply to fields that are so outrageously highly regulated that the buyer insists on being able to audit compliance during the build process. Nonetheless, there is lots of open space for creative people to build and grow.

Posted by David B. Black on 11/29/2021 at 11:14 AM
