Wednesday, December 28, 2011

Energy, Part 1



As the world moves closer to fossil fuel depletion and the evidence mounts that carbon dioxide emissions are driving global warming, sources of green energy have become more and more important. But what are green energy technologies, and how far along are they toward adoption? What is holding up our relinquishing of fossil fuels? What can satisfy humanity's diverse and enormous requirements for energy? To even start to answer these questions, we need to know how energy is harvested, stored, and consumed.

Harvesting Energy

Since Einstein we have known that energy is contained in all matter, and his famous formula E = mc² represents an upper limit on how much energy can be harvested. In a few cases, we have determined how it can be extracted, with varying efficiencies. Let's look at some of them. The first method of energy extraction is an exothermic (heat-producing) chemical reaction. This method is used in internal combustion, and it releases heat, measured in Joules. For example, when combusted with oxygen, the following fuels release this much energy in kiloJoules per gram:
  • Acetylene 11.8
  • Ethanol 27.3
  • Coal 17-21 (sub-bituminous) 29-33 (bituminous or anthracite)
  • Kerosene (Petroleum) 43.1-46.2
  • Methane 50.6
  • Gasoline 51.6
  • Hydrogen 120
The reason hydrogen has so much promise is its clean combustion: water is the only by-product of combustion with oxygen. The other fuels release CO2 in varying amounts per unit of heat produced: natural gas is the lowest at about 1.2 moles of CO2 per megaJoule, and coal is the highest at about 2.0 moles per megaJoule.
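
Here is a quick back-of-the-envelope check of those figures in Python, using the per-gram heats of combustion listed above and idealized stoichiometry (gasoline is treated as pure octane, and coal is omitted because its composition varies too much for this simple approach):

    # Moles of CO2 released per megaJoule of heat, given how many grams of fuel
    # must burn to release one mole of CO2 and the fuel's heat of combustion.
    def co2_per_megajoule(grams_fuel_per_mol_co2, kj_per_gram):
        kj_per_mol_co2 = grams_fuel_per_mol_co2 * kj_per_gram
        return 1000.0 / kj_per_mol_co2  # 1 MJ = 1000 kJ

    # Methane: CH4 + 2 O2 -> CO2 + 2 H2O, so 16 g of CH4 per mole of CO2.
    print("methane:  %.2f mol CO2 / MJ" % co2_per_megajoule(16.0, 50.6))

    # Gasoline, idealized as octane: C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O,
    # so 114 g / 8 = 14.25 g of fuel per mole of CO2.
    print("gasoline: %.2f mol CO2 / MJ" % co2_per_megajoule(114.0 / 8, 51.6))

The methane result comes out to about 1.24, in line with the 1.2 quoted above.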

The conversion of harvested heat into electricity or mechanical motion is quite a different matter.

Most people know that internal combustion engines run on controlled explosions. The addition of heat to a gas causes a significant change in its pressure or volume. This follows from the ideal gas law:

PV = nRT

Here, P is the pressure, V is the volume, n is the amount of the gas (in moles), R is the gas constant, and T is the temperature.

From this we can see that if you increase the temperature while keeping the amount of gas and the volume it is contained in constant, the pressure must increase proportionally. A sudden burst of heat from a chemical reaction can therefore create an explosion: a rapid rise in pressure and, wherever the gas can expand, in volume. This is what happens in an internal combustion engine: the fuel-air mixture burns, the sharp increase in pressure drives a piston, and the motion of the pistons turns a crankshaft.
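
A minimal numerical illustration of that proportionality, using the ideal gas law with made-up figures for the cylinder and temperatures:

    # Ideal gas law: P = nRT / V. At constant n and V, pressure scales with temperature.
    R = 8.314         # gas constant, J / (mol K)
    n = 0.01          # moles of gas in the cylinder (illustrative)
    V = 0.0005        # cylinder volume in cubic meters (0.5 L, illustrative)

    def pressure(T_kelvin):
        return n * R * T_kelvin / V   # pressure in pascals

    T_before = 300.0   # roughly room temperature, in kelvin
    T_after = 2400.0   # rough post-ignition gas temperature (illustrative)
    print("before: %.0f kPa" % (pressure(T_before) / 1000))
    print("after:  %.0f kPa (%.0fx higher)" % (pressure(T_after) / 1000, T_after / T_before))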

In modern cars, this motion is used to propel the car forward, as mechanical work.

This can also be used to move a rotor and generate electricity through electromagnetic induction. Many power plants work this way, including the Moss Landing power plant in California.

Chemical reactions used to generate heat and drive turbines supply about 65% of the world's electricity requirements today.

Wind turbines at Sloterdijk

The second method of energy extraction is the harvesting of kinetic energy from matter that is already moving, such as water and air. This energy is almost always harvested using a turbine.

Have you seen the wind machines dotting the hillsides near you? Those are wind turbines that harvest energy from the wind itself, with very little effect on the environment. This is commonly called wind power, and it is a renewable energy source, powered indirectly by the sun (through convection) and by the turning of the earth (through the Coriolis force). Wind power currently generates about 1% of world electricity requirements, though cumulative output is growing exponentially, suggesting roughly an 8-fold increase over the next 20 years.
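
As a quick sanity check on that projection, an 8-fold increase over 20 years implies roughly 11% annual growth, or a doubling about every 7 years (a sketch only; the growth rate is an extrapolation, not a forecast):

    import math

    # If cumulative wind output grows 8-fold in 20 years at a constant rate r,
    # then (1 + r)**20 = 8, so r = 8**(1/20) - 1.
    years = 20
    factor = 8
    annual_growth = factor ** (1 / years) - 1
    doubling_time = math.log(2) / math.log(1 + annual_growth)
    print("implied annual growth: %.1f%%" % (annual_growth * 100))  # ~11.0%
    print("doubling time: %.1f years" % doubling_time)              # ~6.7 years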

Another kind of kinetic energy that can be harvested is that of moving water. This is commonly called hydroelectric power. I'm sure you have noticed that water runs downhill. Here gravity itself is harnessed, via the inexorable tendency of water to find a common level. Water, usually stored in a reservoir fed by rain or snowmelt, is held back by a dam. Some of the water is allowed to flow down toward the river below, and in between the reservoir and the river is a turbine that runs a generator. About 16% of world electricity requirements are generated using hydroelectric power.
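
The power available from a dam can be estimated from the height the water falls (the head) and the flow rate through the turbines. A minimal sketch with illustrative numbers, not those of any particular dam:

    # Hydroelectric power: P = efficiency * density * g * flow * head
    rho = 1000.0       # density of water, kg/m^3
    g = 9.81           # gravitational acceleration, m/s^2
    head = 100.0       # height the water falls, meters (illustrative)
    flow = 500.0       # flow through the turbines, m^3/s (illustrative)
    efficiency = 0.9   # combined turbine and generator efficiency (ballpark)

    power_watts = efficiency * rho * g * flow * head
    print("power: %.0f MW" % (power_watts / 1e6))   # ~441 MW for these numbers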

What happens when you combine exothermic reactions with turbines? The heat produced by chemical reactions is often used to boil water into steam. The phase change from liquid water to steam increases the volume by a factor of approximately 1,700. That increase in pressure can be used to drive a turbine, in a technology known as the steam turbine. The process is used in aircraft carriers to great effect: steam can drive a catapult to launch planes from the deck, or turn the main screws to propel the carrier through the water. It is also extremely well suited to generating electricity via electromagnetic induction. About 90% of all electrical power in the US is generated using steam turbines, in conventional coal, natural gas, and other fossil-fuel plants as well as in nuclear power plants.
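
That factor of roughly 1,700 follows directly from the densities of liquid water and of steam at atmospheric pressure. A quick check with standard values:

    # Volume expansion when water boils into steam at atmospheric pressure.
    density_cold_water = 1000.0   # kg/m^3, liquid water at room temperature
    density_steam = 0.598         # kg/m^3, saturated steam at 100 C and 1 atm
    expansion = density_cold_water / density_steam
    print("expansion factor: about %.0fx" % expansion)   # ~1,700x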

A third method of energy extraction is via the photovoltaic effect, a close relative of the photoelectric effect. This is the method solar cells employ to harvest energy directly from sunlight. The efficiency of this technique depends on how many photons are required to generate a single electron. Modern solar power plants, however, usually employ a different method to harvest energy, known as solar collection. In this technique, mirrors and lenses concentrate sunlight from a large area onto a small one. Concentrated into a small area, the sunlight can be used to heat water in a closed steam system, as in solar power towers. This steam is then used to drive a steam turbine, which drives a generator, which generates electricity. Often the mirror is shaped like a parabolic trough and, instead of heating water directly, it heats molten salt, which then serves as the heat source for the power-generation system.
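
For a feel for the numbers on the photovoltaic side, here is a small sketch of the energy carried by a single photon of visible light and how many photons a one-watt beam delivers each second (the wavelength is just a representative choice):

    # Energy of a single photon: E = h * c / wavelength
    h = 6.626e-34        # Planck's constant, J*s
    c = 3.0e8            # speed of light, m/s
    wavelength = 550e-9  # green light, meters (illustrative)

    photon_energy = h * c / wavelength
    print("energy per photon: %.2e J" % photon_energy)              # ~3.6e-19 J
    print("photons per second in a 1 W beam: %.2e" % (1.0 / photon_energy))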

Solar power is considered renewable since it harvests energy directly from the sun, an effectively inexhaustible source of free energy.

The largest solar power station in existence, in the Mojave desert of California, generates 354 MW of power. In comparison, the Three Gorges Dam hydroelectric power station, located on China's Yangtze river, generates 18.5 GW of power and, when finished, is intended to generate a total of 22.5 GW. Solar power has not yet reached even 0.1% of the world's electricity requirements.

A fourth method of energy extraction taps the energy contained in matter itself. Nuclear power currently works by exploiting the chain-reaction properties of U-235, an isotope of uranium. In a nuclear power plant, the chain reaction is kept from running away by neutron-absorbing control rods and is moderated by water or other neutron-slowing substances, like graphite. Nuclear reactors generate heat in abundance. This heat is used to heat a second, insulated water cycle and generate steam, which then drives a turbine and a generator to make electricity. It is also possible to use molten sodium, an excellent heat-transfer medium that absorbs few neutrons, to carry the heat of the nuclear reaction, in a so-called liquid-metal reactor.
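
To get a sense of "heat in abundance": each U-235 fission releases roughly 200 MeV, so a back-of-the-envelope estimate of the energy locked in a single gram of U-235 looks like this:

    # Energy released by completely fissioning one gram of U-235.
    avogadro = 6.022e23
    molar_mass = 235.0        # g/mol for U-235
    mev_per_fission = 200.0   # typical energy released per fission event
    joules_per_mev = 1.602e-13

    atoms_per_gram = avogadro / molar_mass
    energy_per_gram = atoms_per_gram * mev_per_fission * joules_per_mev
    print("energy per gram of U-235: %.1e J (~%.0f GJ)" % (energy_per_gram, energy_per_gram / 1e9))
    # ~8e10 J, i.e. tens of gigaJoules per gram -- millions of times the
    # tens of kiloJoules per gram released by chemical fuels.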

Uranium is about 40 times more abundant in nature than silver, and so it is hard to prevent technologically advanced nations from acquiring it. For instance, in Israel, the sands of the Negev desert contain trace amounts of uranium. The separation of U-235 from U-238, the isotope that makes up 99.3% of naturally occurring uranium (U-235 makes up most of the rest), is complicated.


The Diablo Canyon nuclear power plant
Source: PG&E via Power Plants Around the World

Nuclear power presently generates considerably more energy than solar power. For instance, the Diablo Canyon nuclear power plant in California generates 2.4 GW (nearly seven times the largest solar power plant) and supplies about 20% of the Northern California power grid.



Nuclear power accounts for 14% of the world's electricity requirements today.

However, nuclear power has some serious drawbacks. The disposal and storage of radioactive waste (particularly spent nuclear fuel rods) presents problems that, while they can be solved, are nonetheless controversial. The Östhammar facility in Sweden shows promise for treating this problem with the respect and care required of a 100,000-year storage system. The site was chosen partly because the rock at the waste-storage level is relatively free of fractures. After the rock is excavated, about two tons of spent fuel are sealed into each 25-ton copper canister. Each copper canister is then welded shut using a special robotic welder and robotically deposited in an individual tunnel in the repository. Then bentonite clay, mixed with water, is injected into the tunnel, where it expands into place. This forms a watertight barrier that is essentially earthquake-proof. The storage repository is scheduled to open in 2025.

Portable Energy Storage

Once energy is harvested and converted to electricity, it may be stored for later use and carried around. Fuel can also be considered a portable energy storage system, although fuel usually needs to be combusted, which makes it normally unsuitable for the roles batteries fill. But even this axiom is being challenged by the fuel cell.

Portable energy may be stored in several ways. The first is a battery, which generates electricity through electrochemistry. The second is a capacitor, which stores energy in its electric field. The third is a fuel cell, which, like a battery, uses an electrochemical reaction to generate electricity, but is fed fuel continuously rather than recharged.

We grade portable energy storage systems on:

  • capacity, which is the amount of electric charge they can store
  • charge time, the amount of time required to return the device to full or substantial charge
  • discharge rate, the maximum amount of constant current the device can produce
  • energy density, a measure of how much energy the device will produce by weight

Each characteristic matters for different uses. For instance, in an electric car, the energy required to move the car from a standing start is related to the discharge rate. The battery must also be light in relation to the amount of energy it contains, so its energy density must be high.
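
To make those criteria concrete, here is a small sketch that records the four figures of merit for a storage device and checks them against an application's requirements (the class, field names, and threshold values are all invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class StorageDevice:
        name: str
        capacity_wh: float           # energy it can store, watt-hours
        charge_time_h: float         # hours to return to (near) full charge
        max_discharge_w: float       # maximum sustained power it can deliver, watts
        energy_density_wh_kg: float  # energy per unit mass, watt-hours per kilogram

    def suits_electric_car(d: StorageDevice) -> bool:
        # Illustrative requirements: a high discharge rate and a high energy density.
        return d.max_discharge_w >= 50_000 and d.energy_density_wh_kg >= 150

    pack = StorageDevice("illustrative li-ion pack", 24_000, 8.0, 80_000, 190)
    print(suits_electric_car(pack))   # True for these made-up numbers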

On Hydrogen Fuel Cells as an Energy Source

The energy density of hydrogen gas is the highest of all chemical sources, in excess of 120 kiloJoules per gram. The energy density of a lithium-ion battery, in contrast, is only about 0.7 kiloJoules per gram. Of course, nuclear material such as U-238 has an energy density of 20 gigaJoules per gram. Antimatter contains a theoretical maximum of 180 teraJoules per gram. Presently, nuclear material and antimatter are unsuitable for portable energy storage systems due to the weight of a nuclear reactor and the general unavailability of antimatter.
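
The antimatter figure comes straight from E = mc²: one gram of antimatter annihilating with one gram of ordinary matter converts both masses entirely into energy. A quick check, with a comparison to the chemical numbers above:

    # Energy from annihilating 1 g of antimatter with 1 g of ordinary matter: E = m * c^2
    c = 3.0e8      # speed of light, m/s
    mass = 2e-3    # 2 grams in total, in kilograms
    energy = mass * c ** 2
    print("annihilation energy: %.1e J (~%.0f TJ)" % (energy, energy / 1e12))   # ~180 TJ

    # For scale, hydrogen combustion releases about 120 kJ per gram.
    print("ratio vs. a gram of hydrogen: %.1e" % (energy / 120e3))   # ~1.5e9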

All of the above considerations point to hydrogen fuel cells as the most likely successor for portable energy systems. By weight, hydrogen has about three times the energy density of gasoline. By volume, however, hydrogen, even stored as a compressed gas, takes up far more space than gasoline for a comparable amount of energy. This makes bulk a real problem for hydrogen.

Also, almost all of the world's hydrogen production emits CO2, since it uses the steam methane reforming process. So some improvement is needed there to cut down on hydrogen's carbon footprint.

Improving Batteries

The characteristics of a capacitor are a short charge time and a fast discharge rate, essentially the opposite of a typical battery. This is why several companies are trying to merge the two technologies to get the best of both.

Supercapacitors are a new technology that promises to replace the battery as we know it. One valuable attribute of supercapacitors is their apparent ability to charge and discharge thousands of times, making the device stable enough to outlive the equipment it is intended to power. The main problem with super- and ultracapacitors (battery-capacitor hybrids) is energy density. The capacitance of these devices is directly proportional to the electrode surface area. The use of materials like activated charcoal, with its enormous surface area, has pushed the energy density into the usable domain. Nanotechnology, such as carbon nanotube filaments, also promises still higher surface areas and greater energy density.
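
The energy-density gap is easy to see from the capacitor energy formula E = ½CV². A minimal sketch comparing an illustrative supercapacitor cell with the lithium-ion figure quoted earlier (the capacitor's ratings are made up, but typical of the class):

    # Energy stored in a capacitor: E = 0.5 * C * V^2
    capacitance = 3000.0   # farads -- a large supercapacitor cell (illustrative)
    voltage = 2.7          # volts, a typical rated voltage for such cells
    mass_g = 500.0         # cell mass in grams (illustrative)

    energy_j = 0.5 * capacitance * voltage ** 2
    print("stored energy: %.0f J" % energy_j)                        # ~10,900 J
    print("energy density: %.3f kJ/g" % (energy_j / mass_g / 1000))  # ~0.02 kJ/g
    # Versus roughly 0.7 kJ/g for the lithium-ion battery mentioned earlier:
    # dozens of times lower, which is exactly the gap better electrodes aim to close.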

How Energy is Consumed

The amount of electricity humanity uses is in excess of 20 petaWatt hours per year. The US uses about 20% of that, and China uses another 20%. This doesn't include the energy produced by internal combustion engines, or by burning coal, wood, and kerosene for heating. When all energy consumption is added up, total annual consumption is about 474 exaJoules.
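
Converting those figures to common units shows that electricity is only a modest slice of total energy use (a quick conversion using the numbers above):

    # 1 watt-hour = 3600 joules; 1 PWh = 1e15 Wh; 1 EJ = 1e18 J
    electricity_pwh = 20.0
    electricity_ej = electricity_pwh * 1e15 * 3600 / 1e18
    total_ej = 474.0

    print("electricity: %.0f EJ per year" % electricity_ej)                      # 72 EJ
    print("share of total energy: %.0f%%" % (100 * electricity_ej / total_ej))   # ~15%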

It is interesting that world energy consumption decreased 1.1% in 2009, due mainly to the economic downturn in North America. But that trend was not a continuing story: in 2010 world energy consumption grew about 5%. In particular, China's energy consumption did not decrease in 2009, and consequently it is now the world's largest energy consumer, at about 18% of global energy consumption.

What's Next?

In Part 2, we will drill down further into how energy is consumed, discuss what we can do to cut down on energy consumption and what is already happening in that regard, and take up the thorny issues surrounding fossil fuel usage. The carbon footprint of energy production and consumption will also be discussed. Which energy sources have the smallest carbon footprint? The answer is not obvious at all.


15 comments:

  1. If we define a work equation that expresses the amount of knowledge created or destroyed (e.g. LOCs), and then relate this to potential energy given by the degrees-of-freedom in the software platform, perhaps we can posit there is another method of energy extraction that exists in the information domain.

    1. Well, literally computing takes energy. And the minimization of the energy required is a very important task.

      But otherwise I'm not sure what you are getting at. Still, relating two disparate concepts is a great way to get to new ideas.

    2. But what is the most costly source of energy being consumed?

      Programmer time.

    3. Here is some theory I wrote down at copute.com over a year ago, that might elucidate how I am relating information and work. I am unable to link directly to the subsection, so I will insert it here.

      ============
      Efficiency of Work

      DEFINITION Efficiency of work is the ratio of the work output (i.e. performed) divided by the work input, i.e. the efficiency is 100% minus the work lost to friction.

      The lower the friction, then less power is required to do the same work in a given period of time. For example, pushing a cart on wheels, requires much less power than to push it without wheels, or to push it uphill on wheels. The ground rubbing against the bottom of the cart, or gravity, are both forms of friction. The rubbing is analogous to the permanent bending of the metal bar in the Fitness section, because the top of the ground and the bottom of cart are permanently altered. The gravity is a form of friction known as potential energy.

      Given the friction is constant, then the input power (and thus input energy) determines the rate at which work can be completed. If the type of friction is potential energy, then the more work that is performed, the greater the potential energy available to undo the work. This type of potential energy is due to the resistance forces encountered during the work to produce a particular configuration of the subject matter. For example, a compressed spring wants to push back and undo the work performed to compress it.

      ============
      http://en.wikipedia.org/w/index.php?title=Energy&oldid=435292864

      Stored energy is created whenever a particle has been moved through a field it interacts with (requiring a force to do so), but the energy to accomplish this is stored as a new position of the particles in the field—a configuration that must be 'held' or fixed by a different type of force (otherwise, the new configuration would resolve itself by the field pushing or pulling the particle back toward its previous position). This type of energy 'stored' by force-fields and particles that have been forced into a new physical configuration in the field by doing work on them by another system, is referred to as potential energy. A simple example of potential energy is the work needed to lift an object in a gravity field, up to a support.

      ==================

      Since the goal is to get more configurations (i.e. programs) in the software development system with less work, then these resistance forces must be reduced, i.e. increase the degrees-of-freedom so that fitness is closer to 100%. Visualize an object held in the center of a large sphere with springs attached to the object in numerous directions to the inside wall of the sphere. These springs oppose movement of the object in numerous directions, and must be removed in order to lower the friction and increase the degrees-of-freedom. With increased degrees-of-freedom, less work is required to produce a diversity of configurations, thus less power to produce them faster. And the configuration of the subject matter which results from the work, thus decays (i.e. becomes unfit slower), because the resistance forces are smaller. Requiring less power (and work), to produce more of what is needed and faster, with a greater longevity, is thus more powerful (efficient) and explains the wisdom of a forgotten proverb...

    4. I see some errors in my prior comment (e.g. energy is not a force, yet a force can create kinetic or potential energy) and the one that follows, but I don't feel like copy editing at this time and hopefully my point remains valid.

      ==================
      DEFINITION: Knowledge is correlated to the degrees-of-freedom, because in every definition of knowledge one can think of, an increase in knowledge is an increase in degrees-of-freedom and vice versa.

      Software is unique among the engineering disciplines in that it is applicable to all of them. Software is the process of increasing knowledge. Thus the most essential characteristic of software is that it does not want to be static, and that the larger the components, thus the fewer the degrees-of-freedom, and the less powerful (i.e. efficient) the software development process.

      Communication redundance (i.e. amplitude) is a form of power, because its utility exists due to the friction of resistance to comprehension, i.e. due to noise mixed with the signal. The signal-to-noise ratio (SNR) depends on the degrees-of-freedom of both the sender and the receiver, because it determines the fitness (resonance) to mutual comprehension.

      The difference between signal and noise, is the mutual comprehension (i.e. resonance) between the sender and the receiver, i.e. noise can become a signal or vice versa, depending on the fitness of the coupling. In physics, resonance is the lack of resistance to the change in a particular configuration of the subject matter, i.e. each resonator is a degree-of-freedom. Degrees-of-freedom is the number of potential orthogonal (independent) configurations, i.e. the ability to obtain a configuration without impacting the ability to obtain another configuration. In short, degrees-of-freedom are the configurations that don't have dependencies on each other.

      ==============
      http://en.wikipedia.org/w/index.php?title=Resonance&oldid=432632299#Resonators

      A physical system can have as many resonant frequencies as it has degrees of freedom.

      http://en.wikipedia.org/w/index.php?title=Resonance&oldid=432632299#Mechanical_and_acoustic_resonance

      Mechanical resonance is the tendency of a mechanical system to absorb more energy [i.e. less resistance] when the frequency of its oscillations (i.e. change in configuration) matches the system's natural frequency of vibration (i.e. natural change in configuration) than it does at other frequencies.
      ==============

      Thus increasing the number of independent configurations in any system, makes the system more powerful, requiring less work (and energy and power since speed is important), to obtain diversity within the system. The second law of thermodynamics says that the universe is trending to maximum entropy (a/k/a disorder), i.e. the maximum independent configurations. Entropy (disorder) is a measure of the relative number of independent possibilities, and not some negative image of violence or mayhem.

      This universal trend towards maximum independent possibilities (i.e. degrees-of-freedom, independent individuals, and maximum free market) is why Coase's theorem holds that any cost barrier (i.e. resisting force or inefficiency) that obstructs the optimum fitness will eventually fail. This is why decentralized small phenomena grow faster, because they have less dependencies and can adapt faster with less energy. Whereas, large phenomena reduce the number of independent configurations and thus require exponentially more power to grow, and eventually stagnate, rot, collapse, die, and disappear. Centralized systems have the weakness that they try to fulfill many different objectives, thus they move monolithically and can fulfill none of the objectives, e.g. a divisive political bickering with a least common denominator of spend more and more debt[16]...

    5. ...continued from prior comment...

      Thus in terms of the future, small independent phenomena are exponentially more powerful than those which are large. Saplings grow fast into trees, but trees don't grow to moon (nor to end of the universe). The bell curve and power law distributions exist because the minority is exponentially more efficient (i.e. more degrees-of-freedom and knowledge), because a perfectly equal distribution would require infinite degrees-of-freedom, the end of the universe's trend to maximum disorder, and thus a finite universe with finite knowledge. It is the mathematical antithesis of seeking knowledge to have socialism (equalitarian) desires for absolute equality, absolute truth, or perfection in any field.

      The organization of matter and natural systems (e.g. even political organization) follows the exponential probabilistic relationship of entropy and the Second Law of Thermodynamics, because a linear relationship would require the existence of perfection. If the same work was required to transition from 99% to 100% (perfection) as to transition from 98% to 99%, perfection would be possible. Perfection is never possible, thus each step closer to 100% gets asymptotically more costly, so that perfection can never be reached. This is also stated in the Shannon-Nyquist sampling theorem, wherein the true reality is never known until infinite samples have been performed (and this has nothing to do with a pre-filter!). The nonexistence of perfection is another way of stating that the universe is finite in order, and infinite in disorder, i.e. breaking those larger down to infinitely smaller independent phenomena.

    6. Lots of heady concepts!

      1) Programmers use energy to produce code, and the very use of the word "code" by me means it can be impenetrable and complex, especially to another programmer. This necessitates wasted energy, produced by the work required to understand it.

      2) It is desirable to make code easier to produce, in a practical setting. But the way one programmer thinks can be so different from another. I wouldn't dream for a minute that most people would think like I do.

      3) In my own little world, I can create an application that suits my own specific tastes. Even my theories about stuff, without constant checking and balancing with the rest of the world (ROW), will tend to become increasingly specific and solve a problem that may not apply to the ROW. I would therefore like to avoid doing significant work solving No Known Problem. Bob Lanston had problems with this sometimes. But...

      I have done this several times, getting quite far down a road that was fascinating and consistent within itself, but simply not applicable to the ROW. Therefore...

      Marketing is the application of principles that help to solve this problem.

      1) Know your audience. Know what they want. Figure out what they need. Do research. Ask questions. Be aware, and relate your findings to your work.

      2) Rethink your theories so they are in terms of the audience's context. Make your PR relevant to the audience, even attractive. This shameless sell-out is necessary to anything which is going to last. Einstein did this by imagining trains in stations and rowboats. Gestalt is important. Metaphor is necessary when a concept and the solution it implies is beyond the grasp of the ROW.

      3) Purely theoretical problems, such as the Expression Problem, are interesting in their own right but, to make a real difference, their solution must be tied to a real-world benefit. And quickly, else the work is wasted. Or perceived as wasted, which can be just as bad.

      4) The bigger the relevance to the ROW, the bigger the impact of the application. And the bigger the rewards.

      My point is that, for me, having ideas was never good enough in its own right. And I realized that I had to remain anchored to a relevant reality.

      That being said, there is a path to the future that relates programming to matter and energy: the expense of computation / the embodiment of computation. That which computation is embedded in will become smaller and more flexible over time. This will change a lot of the necessities that we currently have because the model for computation will change. The necessities for computation will change with our increasing intertwining with the ability to compute with everyday objects.

      An example of this is the emergent "internet of things". But that's only the start.

      This is the area that theorists need to be concentrating on: the physical implications of programming on everyday life. And as programs become more and more distributed, the physical organization of computing itself.

      You are a genius, Shelby. I'm just hoping to provide a little direction here. ;-)

      And I am aware that I can be totally full of it as well, so take it all with a grain of salt, please.

    7. Re: #1 - 3, the conceptual solution I am working on is modularity. If we can make algorithms finely grained, orthogonal, and comprehensible in their documented APIs, perhaps we can each produce mutual economies-of-scale in the ROW, as a part of the process of creating our specific mashups of algorithms. I have not proven this is scalable or realistic. Since it is sort of a Holy Grail of computer science, that pretty much ensures it can't be done.

      Agreed, I gave myself a deadline that within a year, I have to produce something concrete or move on.

      Re: #1 - 4, agreed, that is why I am getting closer to exiting the research stage and entering the design and marketing stage. I did all the marketing for my prior 2 s/w companies. That was several years of up to $100K corporation gross in the late 1980s, and ditto for CoolPage from the late 90s until after the eye surgeries, when I stopped programming. CoolPage achieved a million cumulative users by 2001 (first release was end of 1998), before the social media sites existed (and when the internet population was much smaller). And Altavista was reporting 330,000 active websites created with CoolPage in 2001. But I was really dejected after losing the eye (and other things I haven't shared). I think I haven't been emotionally ready for another commercial venture yet. The universe works in mysterious ways.

      I agree that distributed models of computing are critical. That is one of my areas of study.

      But not only the runtime model, but also the distributed social networking model of s/w development. This is where the Expression Problem might factor into the ROW.

      Anyway, I have several times in life cut my losses and discarded a dead end. I am growing impatient, which is a good sign. That impatience was something I lost for the past decade. When I was talking to Bob Landson in 1993, I was too impatient to spend another evening talking about things I couldn't implement. I wanted to code.

    8. Afaict, the singular challenge for deterministic models of distributed computation is fundamentally shared state. This is a degrees-of-freedom optimization problem, thus related to energy.

      The Actor model claims to discard determinism, but this is impossible in the real world, because users of programs need determinism.

      Other models try to be deterministic by employing declarative properties, e.g. RDP is to be declarative over each forward chunk of time, but unless they want to be Turing incomplete, they have to interact with state. The real world involves state. Interaction with state can cause deadlocks and infinite recursion, which can only be resolved by crashing or non-deterministic heuristics. Thus these models side-step the true axis of the problems:

      http://copute.com/docs/Event.html#Declarative_Programming

      I want a model that prevents deadlocks and infinite recursion, which are analogous concepts (A wants to query B before it can reply, and B wants to query A before it can reply, same as A calls B which calls A). Thus I want to prevent topological knots that can't be untied, i.e. which are not homeomorphic to an acyclic graph, c.f. my comment in your Interlocking Shapes blog.

      Turing-completeness (i.e. extension and openness) requires unbounded recursion. And afaik unbounded recursion is not homeomorphic to cyclic graphs. Thus perhaps my goal isn't impossible.

    9. Providing yourself a deadline is a good thing. But usually you want to make sure the idea is salable. If you can't make it work, then the point is moot. With something like this, you need to make it demonstrable to sell it.

      It has to appear to be magic. There must be some really good reason for it to exist. I know that waving my hands and telling people that something is worth it basically accomplishes nothing. I could have been barking at a brick wall for as much response as I got.

      This is why you must make it work, and make it slick. This must be your mantra.

      The actor model has been tried with Smalltalk, and I don't believe in it. It's just too crazy.

      Unbounded recursion is a theoretical curiosity, like the Impossible Valknut. In the real world, it isn't really necessary when iteration will generally do as well, or when the problem can be simplified into something that is vastly faster by NOT using recursion. I had a lot of experience in this in the 70s graduating from the Wirth school of programming into the real world of implementability.

      Unbounded recursion is extremely rare in the real world. Even regular expressions don't need it to be implemented properly.

    10. Copute doesn't need to be commercially viable; it just needs to attract enough use to advance the language into a professional tool. I am creating Copute because I am sick of all the other languages. There are a few critically important things I am fixing with Copute. Once I have the tool I want, then I will begin to create commercial products again.

      Indeed regexes are not Turing complete, nor is HTML. But JavaScript and most programming languages are Turing complete (i.e. unbounded recursion). And one use case is where we need to leave the data types open to extension (click here for an example). The reason such virtual dispatch is open recursion is that we can't prevent at *compile-time* the dynamic type from calling us. So I must disagree: unbounded recursion is everywhere in dynamic real life.

      I had a breakthrough at 3am in a half-dream state. I realized "never lock, always fork (copy)". Distributed version control showed us that "resource contention" should be resolved socially, not by forcing centralization:

      http://www.joelonsoftware.com/items/2010/03/17.html

      "I know, it’s strange... since 1972 everyone was thinking that we were manipulating versions, but, it turned out, surprisingly, that thinking about the changes themselves as first class solved a very important problem: the problem of merging branched code."

      http://esr.ibiblio.org/?p=3940

      "The locking model was founded on two assumptions. First, that that modification conflicts would be frequent and severe enough that they could only be... ". ...automated. I think Eric is wrong. The problem is always that resource conflicts are a social issue, it is just that we haven't designed our data structures correctly. For example, if I want to post a tweet, it should no go in a centralized DB, but rather in my DB, and then my DB should send a notification to everyone on the watchlist in my DB. No locks needed.

      I also realized how to handle the typical drag-n-drop event handling case without any impure functions. Click here for an impure way of doing it, that illustrates the problem (note Adobe software is cited as a use case).

      The key is to put the list of events being watched in a data structure that is the input and output of the handler function. This data structure can be set into a GUI widget at construction time.
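
      To make that concrete, here is a rough sketch in Python (not Copute; all names are invented) of one way to read the idea of the watch list flowing through the handler as input and output:

        # A pure event-handling step: the list of watched events is part of the state
        # that flows through the handler, rather than being mutated behind the scenes.
        # Illustrative only -- this is not Copute code.
        from typing import NamedTuple, Tuple

        class WidgetState(NamedTuple):
            watched_events: Tuple[str, ...]  # which events the widget is listening for
            drag_position: Tuple[int, int]   # current drag position

        def handle(state: WidgetState, event: str, pos: Tuple[int, int]) -> WidgetState:
            # Takes the old state (including the watch list) and returns a new one.
            if event == "drag" and "drag" in state.watched_events:
                return state._replace(drag_position=pos)
            if event == "drop":
                # Stop watching drags by returning a state with a new watch list.
                return state._replace(
                    watched_events=tuple(e for e in state.watched_events if e != "drag"))
            return state

        state = WidgetState(watched_events=("drag", "drop"), drag_position=(0, 0))
        state = handle(state, "drag", (10, 20))
        state = handle(state, "drop", (10, 20))
        print(state)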

      So the big win, is that I don't need deadlock prevention, and I can make the Copute language entirely pure. Thus I can simplify many things, such as never will there be an if without a matching else, which solves the dangling else ambiguity of impure programming languages. And I can eliminate many concepts, such as setters and getters. This was the big design win I had expected from the beginning, but when I was studying how to handle events purely, I somehow got sidetracked from my original thinking that it could be done with pure functions.

      Note there are advances in making immutable data structures as efficient as mutable ones.

      http://copute.com/docs/Event.html#Immutability

      http://stackoverflow.com/questions/1255018/n-queens-in-haskell-without-list-traversal/7194832#7194832

    11. Regexes are finite automata, not open to new states. They are static, not dynamic.

    12. It is an interesting example of inverting a problem, when we convert an unsolvable local contention (e.g. local locks can never be guaranteed not to deadlock in the distributed case) into a globally solvable one, as I explained previously. It seems conceptually analogous to how we use limits (i.e. approaching a point but never reaching it) in definite integrals: when we find the area under a curve, we can't do it by sampling the area under each local point on the curve, because there are infinite such points, i.e. an unsolvable problem. This is because a point on a curve is infinitesimally small. And the inverse of the integral (the derivative) gives us the slope at a point on the curve, without having to take the slope between the two points closest to either side of that point (because again there are no such two points, as there are infinite points between any two points on a curve).

      Thus I had the thought that the essence of limits is to make the shape of the function's curve (its slope) first-class, i.e. we don't even care if the function has a value at the limit:

      http://www.khanacademy.org/math/calculus/v/squeeze-theorem

    13. Merging branched code is critical in any system where the number of people modifying it exceeds a certain threshold.

  2. This comment has been removed by a blog administrator.
