
Three Approximations You Should Never Use When Coding

Three computer scientists look at three fundamental concepts in programming: computation and state machines, advanced state machines, and new programming models.
The path travelled by the software industry since its inception, and its accomplishments in scale and scope, are simply astonishing: creating a network the size of the planet, indexing the world’s information, connecting billions of people, driving a car or flying an aircraft, or even an application such as Excel, which enables 500M people to program a computer… Turing would be amazed at what can be done with computers today and at how many people use one every day, if not every minute.
In this context, innovations in software engineering have been running amok, from operating systems, runtimes, programming languages, and frameworks to methodologies, infrastructure, and operations.
However, even though “the software industry has been very successful,” Ivar Jacobson and Roly Stimson report that “under the surface, everything is not as beautiful: too many failed endeavors, quality in all areas is generally too low, costs are too high, speed is too low, etc.”
In particular, the evolution of programming paradigms has been slow, and we should not be afraid to say that it is currently holding back our industry. The emergence of modern languages such as Java and C#, their success, but also their unhealthy competition, have created a conceptual molasses that generations upon generations of developers cannot seem to escape. Even newer languages such as JavaScript are expected to align with these antiquated conceptual foundations, from OOP to MVC.
Our goal in this article is to kick-start an open discussion on the evolution of programming paradigms and to offer a contribution founded on what is perhaps the most robust theory in Computer Science: TLA+.
The article is divided into three sections:
Computation and State Machines.
An Advanced State Machine Model.
A New Programming Model.
If you have not read this article, which provides an introduction to TLA+ by its author, Dr. Leslie Lamport, please do. Anything we could write in comparison is just scribbles.
In his book, Dr. Lamport uses the Die Hard problem to illustrate why “State Machines Provide a Framework for much of Computer Science.”
We have represented in Fig.1 the graph of possible states and actions that Dr. Lamport uses in his example. Each state label concatenates the size and fill level of the two jugs: [jug_size][full|empty|partial][jug_size][full|empty|partial].
Fig.1 – The state machine of the Die-Hard Problem and its solution (path in green)
The solution to the problem can be found by travelling a specific path, highlighted in green in Fig.1.
Dr. Lamport explains that “a computation is a sequence of steps, called a behavior,” and there are three common choices for what a “step” is, leading to three different kinds of behaviors:
Action Behavior
A step is an action; an action behavior is a sequence of actions.
State Behavior
A step is a pair of states; a state behavior is a sequence s1 → s2 → s3 → ··· of states.
State-Action Behavior
A step is a triple ⟨s, α, t⟩, where s and t are states and α is an action; a state-action behavior is a sequence of such steps, each one ending in the state where the next one begins.
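For concreteness, these three notions can be written down as types; here is a minimal sketch in TypeScript, with placeholder names of our choosing rather than anything from Dr. Lamport’s paper:

```typescript
// Placeholder types; a real system would substitute its own states and actions.
type State = unknown;
type Action = unknown;

// 1. Action behavior: a sequence of actions.
type ActionBehavior = Action[];

// 2. State behavior: a sequence s1 → s2 → s3 → ··· of states.
type StateBehavior = State[];

// 3. State-action behavior: a sequence of ⟨state, action, state⟩ steps.
type Step = [State, Action, State];
type StateActionBehavior = Step[];
```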
From the state machine in Fig.1, we can define an “action behavior” as a series of actions:
Fill Big.
Fill Small from Big.
Empty Small.
Fill Small from Big.
Fill Big.
Fill Small from Big.
Incidentally, we just created a new “programming language” based on the “Die-Hard” state machine. We don’t expect it will become popular anytime soon, but in essence, that is how action-based programming languages are created: there is an underlying state machine from which actions are derived, which then can be composed any way we wish without paying much attention to the underlying state machine. Programming languages are “action based” since the vast majority of “states” in our programs are irrelevant.
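As a concrete illustration (ours, not from the original article), here is a minimal TypeScript sketch that encodes the Die-Hard jugs as a state machine and replays the action sequence listed above; the action names and the 3/5-gallon capacities are assumptions taken from the standard puzzle:

```typescript
// A sketch of the Die-Hard state machine: a 3-gallon and a 5-gallon jug,
// with the goal of ending up with exactly 4 gallons in the big jug.

interface State { small: number; big: number; }   // gallons in each jug
type Action = "Fill Big" | "Fill Small" | "Empty Big" | "Empty Small"
            | "Fill Small from Big" | "Fill Big from Small";

const SMALL = 3, BIG = 5;

// Each action is a function from the current state to the next state.
const next: Record<Action, (s: State) => State> = {
  "Fill Big":            s => ({ ...s, big: BIG }),
  "Fill Small":          s => ({ ...s, small: SMALL }),
  "Empty Big":           s => ({ ...s, big: 0 }),
  "Empty Small":         s => ({ ...s, small: 0 }),
  "Fill Small from Big": s => {
    const poured = Math.min(s.big, SMALL - s.small);
    return { small: s.small + poured, big: s.big - poured };
  },
  "Fill Big from Small": s => {
    const poured = Math.min(s.small, BIG - s.big);
    return { small: s.small - poured, big: s.big + poured };
  },
};

// Replay the action behavior listed above.
const solution: Action[] = [
  "Fill Big", "Fill Small from Big", "Empty Small",
  "Fill Small from Big", "Fill Big", "Fill Small from Big",
];

let state: State = { small: 0, big: 0 };
for (const a of solution) {
  state = next[a](state);
  console.log(a, "→", state);
}
// The final state has 4 gallons in the big jug.
```

Running it prints the successive states and ends with 4 gallons in the big jug, which corresponds to the green path of Fig.1.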
Let’s go the other way and derive the underlying state machine from a code snippet, for instance the factorial function (from Dr. Lamport’s article):
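The snippet itself appears as an image in the original article; iterative and recursive versions along the lines of Dr. Lamport’s example, using the variables i (input) and f (output) from Fig.2a, might look like this:

```typescript
// An iterative factorial; i and f mirror the variables of Fig.2a.
function factorial(n: number): number {
  let i = n;   // input counter
  let f = 1;   // accumulated output
  while (i > 1) {
    f = f * i;
    i = i - 1;
  }
  return f;    // once i reaches 1, f holds n!
}

// The recursive version of the same function.
function factorialRec(n: number): number {
  return n <= 1 ? 1 : n * factorialRec(n - 1);
}
```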
We can infer a representation of the corresponding (classical) state machine for each function:
Fig.2a – The State Machine of the Factorial Function
The behavior of the state machine is straightforward: we initialize it with the values of i (input) and f (output), and the state machine loops in the computing “state” until i is equal to 1; at that point, f contains the expected result.
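To make that control state explicit, here is a sketch (ours; the names computing, done, and error follow the control states mentioned later in the text):

```typescript
// Factorial with the control state made explicit.
type Control = "computing" | "done" | "error";

interface FactState { control: Control; i: number; f: number; }

function init(n: number): FactState {
  if (!Number.isInteger(n) || n < 1) return { control: "error", i: n, f: NaN };
  return { control: "computing", i: n, f: 1 };
}

// One transition of the state machine.
function step(s: FactState): FactState {
  if (s.control !== "computing") return s;           // done and error are terminal
  if (s.i === 1) return { ...s, control: "done" };   // the result is in f
  return { control: "computing", i: s.i - 1, f: s.f * s.i };
}

// Run to completion by repeatedly applying `step`.
let s = init(5);
while (s.control === "computing") s = step(s);
console.log(s);   // { control: "done", i: 1, f: 120 }
```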
The same could be done for the recursive version. You will notice that the memory requirements are different, as the state machine needs to be designed to keep at least one intermediate value (fᵢ₋₁). You may also decide to keep all intermediate values to speed up future computations.
Fig.2b – The State Machine of the Factorial Function (Recursive)
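As a sketch of that last point, keeping all intermediate values amounts to adding a cache to the recursive version (the cache is our addition, not part of the original figure):

```typescript
// A memoized factorial: the cache keeps every intermediate value already
// computed, so future computations can reuse them. Assumes n >= 1.
const cache = new Map<number, number>([[1, 1]]);

function factorialMemo(n: number): number {
  const hit = cache.get(n);
  if (hit !== undefined) return hit;          // intermediate value already known
  const result = n * factorialMemo(n - 1);
  cache.set(n, result);
  return result;
}
```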
In this first section, we have demonstrated that state machines can compute and, conversely, that (action-based) computations can be represented as state machines; but for the most part, the states of the state machine are redundant, merely the shadow of the actions. In the factorial functions, the (control) states (computing, done, error) would not add any clarity to the implementation. That is why, for the vast majority of programs, we never bother making the states explicit, and there is no argument there.
There is, however, a problem with “action-based” formalisms and programming languages. It can be illustrated with a simple observation using a glass of water.
In an action-based behavior, this system would be described by a state variable v (volume of water in the glass) and two actions: add water, drink.
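A minimal sketch of that action-based description (the glass capacity and sip size are our assumptions; the text only specifies v and the two actions):

```typescript
// Action-based description of the glass of water.
const CAPACITY = 250;   // ml, assumed capacity of the glass
const SIP = 50;         // ml, assumed amount removed per drink

let v = 0;              // volume of water currently in the glass

function addWater(amount: number): void {
  v = Math.min(CAPACITY, v + amount);
}

function drink(): void {
  v = Math.max(0, v - SIP);
}
```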
On the other hand, the state-action behavior of the glass relies on three control states, and the corresponding next-actions:
full: drink.
