Showing posts with label amorphous computing. Show all posts

Sunday, July 18, 2010

2D stadium wave

I finally got around to making my previous stadium wave simulation run in 2D. It makes pretty patterns, as I expected it would. The fascinating thing about it is that there are interior waves that back-propagate as the outer wave spreads out. Some sort of instability causes little imperfections (probably due to the imposed spatial lattice) that get little eddies started, and once they're started they tend to collide and make interesting things happen. The simulation is toroidal, so once the wave hits the edges it interacts with itself and then all kinds of beautiful things happen. (Note: the image looks wider than it really is -- what looks like an oval is actually a circle.)
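For the curious, the setup is easy to sketch: a two-variable excitable medium on a toroidal lattice, with the periodic boundaries handled by np.roll. The FitzHugh-Nagumo-style kinetics and every parameter below are invented stand-ins for my actual gate model, which isn't written out in this post -- this is just a starting point for playing with the same kind of system.

```python
import numpy as np

def laplacian(f):
    # 5-point Laplacian with periodic (toroidal) boundaries
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def step(u, v, dt=0.05, D=1.0, a=0.1, eps=0.02, b=0.5):
    # FitzHugh-Nagumo-style excitable kinetics: u is the fast activator,
    # v the slow recovery variable (stand-in equations, made-up rates)
    du = u * (1.0 - u) * (u - a) - v + D * laplacian(u)
    dv = eps * (u - b * v)
    return u + dt * du, v + dt * dv

n = 64
u, v = np.zeros((n, n)), np.zeros((n, n))
u[n//2-4:n//2+4, n//2-4:n//2+4] = 1.0   # central excited spot
for _ in range(200):                     # 10 time units
    u, v = step(u, v)
```

The slow recovery variable trailing the outward-spreading ring is the ingredient that gives interior waves room to appear once the wave wraps around the torus and hits itself.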

Tuesday, October 27, 2009

Mexican wave quiver plots



A lesson about interactivity. Having realized that I should be using quiver plots to understand the dynamics of 2-variable differential systems, I went back to the Mexican wave and made a quiver plot. In Matlab it is a slow task to tweak a parameter and re-plot over and over again, so I made a zlab (C++) version that allowed me to twiddle the parameters in real time.

What a difference! Understanding the parameter space of this system took me a week before; now it took me only 3 minutes. It is proof again of the power of interactive twiddling. With the fast response and the ability to "scratch" back and forth, you can quickly intuit both the action and the derivative of each parameter. While twiddling I was saying things like: "Oh... this is causing all the arrows on the left to go up" and "this is moving the steady-state point." I understand this system a lot better now.
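For reference, the quiver plot itself is nothing more than the two derivative functions evaluated on a grid. A sketch with placeholder kinetics (the actual Mexican-wave equations aren't reproduced in this post, so ds_dt and dt_dt below are invented):

```python
import numpy as np

# Placeholder 2-variable kinetics for "standing" (s) and "tired" (t);
# p1..p4 are made-up parameters to twiddle.
def ds_dt(s, t, p1=1.0, p2=0.5):
    return np.tanh(p1 * s) - p2 * t - 0.1 * s

def dt_dt(s, t, p3=0.3, p4=0.2):
    return p3 * np.maximum(s, 0.0) - p4 * t

S, T = np.meshgrid(np.linspace(-2, 2, 21), np.linspace(-2, 2, 21))
U, V = ds_dt(S, T), dt_dt(S, T)
# matplotlib renders this field with: plt.quiver(S, T, U, V)
```

Re-evaluating U and V on a parameter change is cheap, which is exactly what makes the real-time twiddling workflow possible.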

In the following plots, I show two time traces at different places in space as indicated on the northwest space-time plot.

Plot 1: the stable Mexican wave. The third quadrant of the phase plot means "sitting & excitable (not tired)". Note that there is a stable equilibrium in that quadrant, so if the system ends up anywhere in the 3rd quadrant then it falls into that basin and stays there until disrupted, for example by a neighbor who pulls it towards the standing side.


Plot 2: two stable points. In this configuration, the 3rd and 4th quadrants have stable points, so the system simply transitions from the starting point in Q3 until it gets knocked into Q4. I guess if I were to hit it with a pulse of "tired" then I could get it to transition back again, so this system is akin to a 1-bit memory. Haven't decided how to take advantage of this yet, but I'm sure there's something cool to be done with it.


Plot 3: oscillator. Now the equilibrium point has been removed from Q3, so the system just perpetually oscillates. Note that the oscillating attractor is stable -- the green and blue traces converge onto the same limit cycle. I'm not positive what gives it this property. I thought that diffusion might be helping to stabilize it, but that's not the case as shown...

... in the following with no diffusion.

Plot 4: a system that has the Q4 equilibrium right on the boundary with Q1, so that it momentarily seems to commit but then gets sucked into a small oscillation around the origin. This oscillation, however, does not appear to have a stable limit cycle, so it damps out.

Plot 5: a pattern-forming system. Here Q1 and Q2 have a huge shearing force compared to Q3 & Q4. Somehow this seems to turn it into a chaotic attractor, but I don't really understand how yet.

Tuesday, October 20, 2009

Monday, October 19, 2009

Friday, July 3, 2009

Understanding logic level conventions




Over lunch John clarified a few things for me about the nomenclature used for transfer functions.

A "transfer function" is the model of how a gate/amplifier behaves. Given an input level (voltage for an electrical device or molarity for a chemical one) the model describes the equilibrium (or steady-state) output level. The above graph illustrates a hypothetical transfer function.

The main point of confusion for me was "What exactly is the definition of 'gain'?" and "By what convention are logic levels defined?"

John pointed out that the word "gain" is an over-used / abused word. Many people over-simplify the transfer function graph above and use "gain" to mean different things. The gain is the slope -- but as you can see, the slope of the function is different at different input levels, so there is no such thing as "the" gain for a gate.

In the middle, linear range, the slope is roughly constant over an input domain. When building analog devices it is this roughly-linear region that is of interest and so an analog engineer would probably refer to the approximately-constant slope in this linear region as "the gain".

However, a digital engineer uses the wider non-linear range to encode a binary variable. In this case, we must have a convention that defines the logic levels. The electrical convention is that the two places where the slope, aka the "incremental gain", is equal to 1 define the inside bounds of the logic levels. Anything outside of these bounds is considered a valid logic level. Anything inside of them is considered "undetermined". The nominal values (the desired levels to be produced by any gate) are defined by a "noise margin" outside of these incremental-gain = 1 points.
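The convention is mechanical to apply to any measured curve: locate the two inputs where the magnitude of the incremental gain crosses 1. A sketch using a made-up logistic transfer function (the real curve would come from measurement or the gate model):

```python
import numpy as np

# Hypothetical inverting transfer function (a logistic curve)
vin = np.linspace(0.0, 5.0, 5001)
vout = 5.0 / (1.0 + np.exp(4.0 * (vin - 2.5)))

slope = np.gradient(vout, vin)                    # incremental gain at each input
cross = np.where(np.diff(np.signbit(np.abs(slope) - 1.0)))[0]
v_il, v_ih = vin[cross[0]], vin[cross[-1]]        # inner bounds of the logic levels
# valid logic levels: vin < v_il or vin > v_ih; in between is "undetermined"
```

The noise margins then sit outside [v_il, v_ih], pushing the nominal levels further into the flat, high-gain-rejecting ends of the curve.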

Friday, May 29, 2009

Molecular model transfer function

Today I got around to trying out a simplified molecular version of the gate model that will replace my hyperbolic function.



The kinetics are all arbitrary for the model, but the shape of the transfer function looks even better than the made-up model from before. There's an almost perfectly linear section in the middle -- it looks more made-up than my made-up model! This is assuming that all three reactions have the same strength. Next, I need reasonable terms for the three reaction rates.

Sunday, May 24, 2009

More parameter space of "standing" circuit

Using the parameter space maps made last time, I've set the "standing" circuit into a place where it has a nearly symmetric bi-stable steady-state at p1 = 0.25 and p2 = 0.50.



The following is the derivative at a given concentration of standing. This dy/dt vs y plot (I don't know if there is a correct name for this kind of plot) shows that there are two stable steady states at the zero crossings -5 and +5. There's also the unstable point near zero. It is not exactly at zero because the gate model functions do not cross at zero, as seen below.
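A dy/dt vs y curve like this can be analyzed mechanically: zero crossings are steady states, and a crossing is stable when dy/dt passes from + to -. A sketch with a stand-in cubic that has zeros at -5, +5, and (like the gate model) a middle zero shifted slightly off 0:

```python
import numpy as np

# Stand-in for the measured dy/dt vs y curve (not the actual gate model)
y = np.linspace(-8.0, 8.0, 1001)
dydt = -0.01 * (y - 5.0) * (y + 5.0) * (y - 0.2)

crossings = np.where(np.diff(np.signbit(dydt)))[0]
roots = y[crossings]
# stable steady state: dy/dt passes from + to - through the zero
stable = [y[i] for i in crossings if dydt[i] > 0 > dydt[i + 1]]
```

With this curve the -5 and +5 crossings come out stable and the one near 0.2 unstable, matching the reading of the plot above.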






Now I continue the analysis with the "tired" half of the circuit. I'm interested in the response of "tired" when the "standing" input reaches 0, the point at which the tired circuit will charge fully.



Charging of the tired circuit when standing is 0 and tired starts at its steady-state value of -5


So, "tired" reaches 0 (the point at which gate 5 is going to be fully on) within about 20 time units when standing = 0.
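That ~20-time-unit crossing can be reproduced with a first-order charging sketch. The rate k and target below are made-up stand-ins tuned to land near 20; the real numbers come from the gate model.

```python
# Euler-integrate dT/dt = k * (T_target - T) from T(0) = -5 and report
# when "tired" crosses 0. k and t_target are invented placeholders.
def charge_time_to_zero(t0=-5.0, t_target=2.0, k=0.0626, dt=0.01, t_max=100.0):
    T, t = t0, 0.0
    while T < 0.0 and t < t_max:
        T += dt * k * (t_target - T)
        t += dt
    return t

t_cross = charge_time_to_zero()   # analytically ln(3.5)/k, about 20 time units
```

Because the charge is exponential, the crossing time scales as 1/k, which is why tweaking the gate concentrations moves the onset of "tired" so directly.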

The following is a sampling of the parameter space for p1 and p2 given "standing" = 0. The steady-state value of tired changes as a function of p1, so for each graph I've started "tired" off at the appropriate steady-state and then watched the evolution when "standing" = 0. This demonstrates that I can delay both the onset of tired (when it hits zero) and how high tired gets at steady-state by adjusting these two parameters.


Next up, I put the circuit back together again...

Wednesday, May 20, 2009

Parameter space of "standing" circuit

I've been working on decomposing the traveling pulse circuit in order to understand the parameter space. Today I've worked on the isolated "standing" circuit.



There are two parts: the "pull down" gate that is constantly trying to pull the system to a negative value, against the action of the resistor, which is trying to pull it to zero. The ratio of the pull-down gate (1) to the resistor (RNAase) determines the steady-state level when the feedback gate 3 is not active. The RNAase resistor must be common to all nodes, so I treat it as a fixed parameter; I picked the value 0.01 out of thin air.

For the following graphs, I pick different starting conditions for "standing" and let the circuit evolve. Each colored trace in the chart is one run of the circuit. Note that there are two steady states: one at about 28 and the other at about -1. If the "standing" value falls below about -0.5 then it goes to the low steady-state; above that it goes high. I like this chart in comparison to typical transfer function plots because it lets you see both the kinetics and the steady-states in one place.
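The bistability can be sketched by sweeping initial conditions through a toy version of the node: a constant pull-down gate, a sigmoidal feedback gate that switches on near -0.5, and a resistor toward zero. All rates below are invented, merely chosen so the high state lands near the post's 28; the low state comes out near -2 rather than -1.

```python
import numpy as np

# Toy "standing" node: pull-down gate (p1), sigmoidal feedback gate 3
# (p2, turning on near y = -0.5), and RNAase resistor r toward zero.
def dstanding(y, p1=0.02, p2=0.3, r=0.01):
    feedback = p2 / (1.0 + np.exp(-4.0 * (y + 0.5)))
    return -p1 + feedback - r * y

finals = []
for y0 in np.linspace(-3.0, 3.0, 4):      # initial conditions -3, -1, 1, 3
    y = y0
    for _ in range(20000):                 # 1000 time units at dt = 0.05
        y += 0.05 * dstanding(y)
    finals.append(y)
states = sorted(set(round(f) for f in finals))   # the two attractors
```

Sweeping y0 this way shows both the kinetics and the final states, which is the same thing the multi-trace chart above makes visible.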


Here's the same chart but zoomed in around the origin so you can see that the critical point is about -0.5 which is determined by the gate model.

I varied the two parameters over a range and plotted the parameter space result (best viewed on large monitor).


From top to bottom p1 is increasing. From left to right p2 is increasing. Increasing p2 shifts the steady-state of the "standing" state upwards and thereby separates the two states more dramatically. As p1 is increased -- moving from top to bottom -- both the top and bottom steady-states shift downwards, but the bottom one seems to move faster. In the lower left, the two states blur into each other and are poorly defined. So, in general you'd like to push p2 and p1 fairly high, but this comes at the cost of slowing the approach to steady-state as they are pushed further away. When the other half of the circuit is added, the p2 value will have to be smaller than p5, so that will determine the upper bound of p2.

Tuesday, May 19, 2009

Complementary logic ideas

Talking with John this morning about the equivalence between the gates we're proposing and electrical analogs. John points out that our gates are like "half of a tri-state gate". We started thinking about higher-order logic cells using the proposed gates and realized that you can be logically complete assuming that you can mix gates with complementary inputs and only lose some fraction of them to a bi-molecular cancellation. If this is not the case -- if you lose everything -- then there might still be a way to do it with extra translation stages, but I haven't thought that through yet.


(Image update 21 May. Thanks to Erik for pointing out that I forgot the promoter completion domain.)

Assuming that the above gate cancellation reaction is not favorable (or that tethering them reduces the favorability), you could combine the gates to make buffers, inverters, and a biased AND gate that doesn't produce a very clean output, but which would have the property that when inputs A & B are + the output would be +, and all other input combinations would give an output slightly - to very -.

Traveling pulse - a stable orbit


I started hunting around in parameter space trying to get my head around what makes the traveling pulse stable and predictable. I don't yet have a set of exact rules, but what I've learned is that the reactions need to be slow compared to the diffusion. This is achieved by simply lowering the concentration of the gates and resistors appropriately. Next, the pull down gates 1 & 2 are very small compared to the feedback and shutdown gates. Also, the "tired" charging gate is very small so that you can delay the onset of the shutdown.

The biggest point is obvious when you look at the phase diagram: you have to let the system get back into steady-state before another pulse hits it. Also interesting is how perfectly straight the edges of the phase diagram are. I think this means that the gates are driven way out of their linear regions and are running in steady-state most of the time. I'm going to try to make a graph to make sense of that.

I also found that it is easy to make complex patterns form when you push the system really hard, as in the following class-3-like cellular automata. Note that the system was started with symmetric initial conditions and has fully symmetric rules, yet it is symmetric only until it starts to interact with itself; once it reaches the boundaries, it becomes asymmetric. Fascinating. I suppose this is because the "periodicity" of the pattern is not related to the size of the container, so the two periods start to alias in some weird sense.

Thursday, May 14, 2009

Traveling Pulse Phase Diagrams


Working on understanding the behavior of my amorphous traveling pulse, the "Mexican Wave". On the right is a marked-up phase diagram of the two states: "standing" on the x axis and "tired" on the y axis. The markups show the regions where different parts of the circuit are operational. This has helped me get my head around what has to be adjusted to make the system more predictable. One lesson is that the mystery of why the pulse travels at different speeds has something to do with the fact that the system does not usually get all the way back down into the same steady-state. The bottom steady-state point, "not standing and not tired", should be determined by the relationship of pull-down gates 1 & 2 and the grounding resistors. So, the next thing I'm going to do is try to adjust things so that I give the system enough time to always settle down into that same point. Then I can tackle understanding how the other gates reshape this phase chart.

An observation. The one directional traveling pulse on the left is making a pattern that looks like the branching pattern on a plant stem. This reminds me of a plant branching model Wolfram talked about in NKS.

Wednesday, May 13, 2009

More fun with Traveling Pulse

I started messing around today with the amorphous traveling pulse from yesterday. First thing I did was try creating an asymmetric starting condition by "pipetting" in both a spot of "standing" as yesterday and also a spot of "tired" adjacent to it, so that the pulse could travel only in one direction. As before, the x axis is cyclical space, which is why the pulse travels off to the left and then reappears on the right.

Inexplicably, the pulse does not always travel at the same velocity. I have no idea why; maybe it's an artifact of the integration, but it seems periodic -- like it's accelerating and decelerating in some predictable way.

I then started exploring the parameter space of the circuit, repeated here for reference.


(Drawing revised 19 May)


I started with gate 3 vs gate 5. All things being equal, the concentration of gate 5 needs to be greater than the concentration of gate 3 so that it can overpower "standing" when "tired". As the following phase chart of 3 vs 5 illustrates, this is true. Also, as 3 grows, so does the pulse width. This is intuitive because the harder p3 works to pull up "standing", the longer it takes the discharge circuit to overpower it. Graph of P3 vs P5:

Then I started on P3 vs P4. P4 determines how fast it gets "tired", so more P4 should create a narrower pulse width, which is indeed the case. As you would expect, there's a limit: P4 can make the system tired so quickly that the pulse disappears (it becomes tired the instant it stands). However, there's a relationship between P3, the charging drive, and P4, the "getting tired" drive. As the standing drive is increased, you have to compensate with how fast the system becomes "tired". Makes sense. Ratios in the 5-7 ballpark seem to work well given the arbitrary other settings I have. Graph of P3 vs P4:



Crazy things happen when you change the two stabilizing gates p1 and p2. In this simulation the pull-down resistors are set to 0.01 and diffusion to 0.3. As p1 increases the pulse travels slower, which makes sense as it is harder to charge standing. (Thanks to Xi for pointing out that I had previously stated this backwards.) At some critical value, the charging circuit can't keep up with the diffusion and pull-down sides and the pulse evaporates. Really weird things start happening around p1=0.01 and p2=0.07; it looks like it becomes unstable and pattern forming, which is cool.


Some close-ups of instability patterns. They look like Sierpinski triangles, which makes some vague sense because standing and tired are in opposition to each other and can act as some kind of binary counter where diffusion permits the next space over to act as the carry bit. (I say this while waving my hands furiously :-)


Tuesday, May 12, 2009

Traveling Pulse Amorphous Computer

After a few meetings with John, Nam, Xi, Edward, and Andy in the last few weeks I think I have a plausible molecular gate model that can make some interesting amorphous computations. Specifically, I've been trying to make the "Mexican Wave" -- an amorphous pulse wave.




A variable "A" is encoded by the log ratio of the concentration of two RNA species: a sense strand called "A+" and its anti-sense strand called "A-".




(Image updated 21 May -- thanks to Erik for pointing out I left off the promoter completion domain.)

Gates are molecular beacons that use promoter disruption to squelch the generation of some output strand. For now, all gates are unary operators. The RNAs can be displaced off the beacons by toe-hold mediated strand displacement. This design is basically Winfree lab's transcriptional circuits but where the gate is a hairpin DNA molecular beacon and where variables are encoded by log ratio of sense and anti-sense instead of as a proportionality to concentration of an ssRNA.



(Note: I updated this diagram to change the naming convention on 17 May 2009. Again on 21 May, thanks to Erik for noticing I left off the promoter completion domain.)

Gates are modeled as having hyperbolic production curves and can be built according to one of four choices of sense and anti-sense sequence on the inputs and outputs. As a matter of convention, the sense strand is labeled "+" relative to the ssRNAs, not relative to the DNA because the concentration of the RNAs is the variable of interest in these systems.
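As a sketch, such a hyperbolic gate model is just a saturating curve with a sign choice on each side. vmax and K below are placeholder kinetic constants, and the sign flips are only a loose stand-in for the four sense/anti-sense wiring choices:

```python
import numpy as np

# Hyperbolic (saturating) production curve for a unary gate. in_sense /
# out_sense loosely model the four sense/anti-sense variants; vmax and K
# are invented constants, not fitted kinetics.
def gate(x, vmax=1.0, K=0.5, in_sense=+1, out_sense=+1):
    drive = np.maximum(in_sense * np.asarray(x, dtype=float), 0.0)
    return out_sense * vmax * drive / (K + drive)

out = gate([0.0, 0.5, 5.0])
```

The output rises linearly near zero input and saturates at vmax, which is the shape all the transfer-function plots in these posts are built from.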

To explore the model, I created a circuit that I hoped would make an amorphous pulse propagating wave. Below, I switch into an electrical analogy, which I do for my own sanity. The charge across the capacitors represents the two variables, which I call "standing" and "tired" by analogy with the Mexican Wave. The gates are labeled like "i+o+", meaning "when the input is + the output will be +". (I've changed around the naming convention several times; this update is as of 17 May.) The gates without inputs are under constitutive promotion and are labeled only by what they output. All nodes are pulled down by the same RNAases, represented here as resistors to ground from each capacitor. The two variables are assumed to diffuse at equal rates. The only changeable parameter is assumed to be the concentrations of the gates.


(Thanks to Xi and John for help reworking this diagram. I updated it on 19 May.)

This circuit can be thought of like this. "Standing" and "tired" are constantly being pulled low by the gates 1 & 2 against the action of the resistors. If the rest of the gates weren't there, this would ensure the system will be "not standing" and "not tired". Gate 3 puts feedback on "standing" thus a small threshold level of "standing" will generate more until it saturates in steady-state against the resistor. Gate 4 increases "tired" if "standing". Gate 5 is in high concentration relative to the other gates and can thus overpower the "standing" variable when "tired".



Here are the 1D amorphous results. The two plots are "standing" (left) and "tired" (right). The x axis of each is space (cyclical coordinates). The y axis, from bottom to top, is increasing time. Blue represents a high ratio of - to + strands. Red represents a high ratio of + to - strands. Black represents an even ratio. At time zero, a pulse of + is added to the "standing" variable, representing a manual pipetting operation at some point in space. As time passes (bottom to top), the pulse propagates in both directions at a constant rate until the two pulses hit each other and stop.
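A generic version of this 1D run is easy to reproduce: an excitable activator ("standing") and a slow inhibitor ("tired") on a ring. FitzHugh-Nagumo kinetics stand in for the gate model here, and every parameter is invented:

```python
import numpy as np

# "standing"-like activator u and "tired"-like inhibitor w on a ring of
# 200 cells; a pulse "pipetted" in at the center propagates both ways.
n, dt, D = 200, 0.05, 0.5
u, w = np.zeros(n), np.zeros(n)
u[95:105] = 1.0                      # the pipetted spot of "standing"
passed = False                       # did the wavefront reach cell 130?
for _ in range(2000):                # 100 time units
    lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u   # cyclical space
    du = u * (1.0 - u) * (u - 0.1) - w + D * lap
    dw = 0.01 * (u - 0.5 * w)
    u, w = u + dt * du, w + dt * dw
    passed = passed or u[130] > 0.3
```

The slow build-up of w behind the front is what turns the spreading excitation into a finite-width pulse instead of a filled-in disk, and it is also why colliding pulses annihilate rather than pass through each other.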

Sunday, January 11, 2009

Mythbusters accidentally create a reaction-diffusion-like system


This short video clip shows that the Mythbusters appear to have accidentally and unknowingly created a kind of reaction-diffusion system in their "Trailblazers" episode when they ignited a trail of gasoline. If you look closely behind Adam you'll see waves propagating in a manner reminiscent of various reaction-diffusion and cellular automata systems. I think what's going on here is that the gasoline vapor and the moving ignition create a two-dimensional amorphous relaxation oscillator. Remember that it is the vapor of gasoline that is flammable, not the liquid. As the fuel evaporates, it takes a few moments before it reaches an ignitable fuel-to-air mixture. When it does, a wave of flame propagates over the area, eliminating the vapor, which then slowly re-accumulates until it ignites again when it encounters an ignition wave from some other region. The exact position of the flame is highly sensitive to the environment and initial conditions, so the system turns into a set of chaotic, cyclically flammable domains that move around in a fascinating manner. Apparently without knowing it, Jamie and Adam have stumbled upon a quite lovely piece of science. I've got to try to reproduce this!

Thursday, January 1, 2009

Paper


Been working all week with Andy, Xi Chen, and Nam on a paper. Using the kinetics from Jongmin Kim's bi-stable switch paper, Nam produced a nice simulation of the amorphous ring oscillator. Happily, these images look much like my earlier, cruder, simulation but now have dimensions. Features are measured in mm and time in hours. I think that's pretty cool -- a molecular scale device producing features at the mm scale. Would be great if it actually works when we try it someday!



Also from this paper: Andy, Xi Chen, and I came up with a hopefully plausible complementary transcriptional NAND gate. The idea is that all signals are encoded by the sense and anti-sense complements of an RNA sequence. For example, signal "A" is high when some specific RNA sequence is high, and it is low when the anti-sense of that sequence is high. The hypothetical gate is made from two complementary promoters on opposite sides of a double-stranded DNA. On the left side, two molecular-beacon-like devices sequester half of a promoter that activates only when both inputs are high. On the right side, a single hairpin is folded such that a promoter is normally active but is deactivated when A and B invade (thanks Xi Chen). To work, the kinetics will have to be very delicately balanced, so maybe it won't work well, but at least it's a conceptual step in the right direction; we've been talking about a CMOS analog for years now and this is the first time we've made any conceptual progress.

Tuesday, December 23, 2008

Diffusively coupled oscillators

Having previously seen that oscillators near 180-degree phase boundaries run faster, I conducted the experiment of isolating two oscillators and varying the diffusive constant between them (thanks John). This first graph shows the time trace of the two diffusively-coupled oscillators started 180 degrees out of phase. Note how both amplitude and wavelength differ early on compared to later, when the two have synchronized. The next graph shows the peak frequency of the first phase (before phase lock) as a function of diffusion.
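The experiment is easy to mock up with a generic limit-cycle model. Here two Stuart-Landau oscillators (a stand-in, not my reaction network; omega and d are arbitrary) are coupled diffusively and started just shy of 180 degrees apart, since in this idealized symmetric model exactly 180 degrees is an unstable state that never breaks:

```python
import numpy as np

# Two Stuart-Landau limit-cycle oscillators with diffusive coupling d.
# Generic stand-in model; all parameters invented.
def run(d, phase0=np.pi * 170.0 / 180.0, steps=20000, dt=0.01, omega=1.0):
    z1, z2 = 1.0 + 0.0j, np.exp(1j * phase0)
    for _ in range(steps):
        f1 = (1.0 + 1j * omega) * z1 - abs(z1) ** 2 * z1 + d * (z2 - z1)
        f2 = (1.0 + 1j * omega) * z2 - abs(z2) ** 2 * z2 + d * (z1 - z2)
        z1, z2 = z1 + dt * f1, z2 + dt * f2
    return z1, z2

z1, z2 = run(d=0.2)
phase_gap = abs(np.angle(z1 / z2))   # ~0 once the pair phase-locks in phase
```

During the anti-phase transient the coupling also suppresses the amplitudes, which is the same amplitude difference visible in the early part of the time trace above.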


I know this is a well-known phenomenon and is the basis of reaction diffusion systems, so I started hunting for references. First Google hit: PRL 96 054101 - Daido and Nakanishi - Diffusion Induced Inhomogeneity in Globally Coupled Oscillators. They show various facets of a similar system without regard to spatial dynamics (like this experiment). They reference Erik's father's (A. T. Winfree) book The Geometry of Biological Time, which looks like a must-read. They also reference an interesting-sounding book: Synchronization - A Universal Concept in Nonlinear Sciences, which looks like another must-read, and the UT library has an online copy!

Monday, December 22, 2008

Feature size varies to the 1/2 power in diffusive latches


I extended yesterday's bi-stable diffusive latch over a larger diffusive range. It was roughly linear over a 0 to 1 domain. It is proportional to the 1/2 power over a larger range. I took Edward's advice and plotted it with error bars and just ignored the deviation information. Here is the sampling over 30 trials with differing small random initial conditions, with 1 SD error bars.

Sunday, December 21, 2008

Latch with diffusion


Feature size changes with diffusion ... more diffusion, bigger features.


Today I played around with how the diffusion coefficient affects the formation of patterns in the simple latch case. This is an array of bi-stable switches with uninitialized starting conditions (i.e., a little bit of noise). The feature size varies directly with the diffusion. The graph shows that the mean feature size (blue, multiple trials) rises fairly linearly with diffusion, as does the variance (standard deviation plotted in red, same trials). There's probably a sexier way to make this plot with error bars or something; I'll think about that.
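A sketch of the experiment: a ring of bistable "latches" seeded with small noise, where counting domain walls after the field freezes gives the mean feature size. The u - u^3 bistable kinetics and all the parameters below are generic stand-ins for my latch model:

```python
import numpy as np

# Ring of bistable latches: u' = u - u^3 + D * (discrete Laplacian),
# seeded with small noise; domains of +1/-1 form and freeze.
def feature_size(D, n=256, steps=4000, dt=0.05, seed=0):
    rng = np.random.default_rng(seed)
    u = 0.01 * rng.standard_normal(n)
    for _ in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
        u += dt * (u - u ** 3 + D * lap)
    signs = np.sign(u)
    walls = np.count_nonzero(signs != np.roll(signs, 1))  # domain boundaries
    return n / max(walls, 1)            # mean domain ("feature") size

small, large = feature_size(D=0.1), feature_size(D=2.0)
```

Repeating the sweep over many seeds and D values gives the mean-and-deviation curve plotted above.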

Wednesday, December 10, 2008

Pattern formation phase experiments

Today I played around with trying to understand where the patterns come from in simple oscillators. In the following picture, the center region is exactly 180 degrees phase shifted relative to the outside (circular "boundaries" as always). Note the cool reconnection events about 1/3 and 2/3 of the way up from the bottom (t=0).



The interesting thing here is that the boundaries begin to oscillate faster than the surrounding regions. The center goes through 6 cycles in the time it takes the boundary to go through 7. At 7 edge-cycles versus 6 center-cycles there's a disconnection event where the two regions become disjoint and then reconnect one cycle later. These discontinuities are where interesting patterns emerge.

Why should it be that the boundary oscillates faster than the center? This is a bit counter-intuitive. Imagine two oscillators sitting next to each other and diffusing some of their energy into each other. Consider the moment when the first oscillator is at its maximum value and the second is at its minimum. At this moment the first oscillator is dumping a lot of its material into its neighbor. In other words, right when the second should be at its minimum value it is instead being "pulled forward" by the incoming flux. Conversely, by dumping flux into its neighbor, the first never quite makes it to its maximum value and thus sort of short-cuts its way to the downward part of the cycle. Half a wavelength later, the reverse is true. Thus, both oscillators act to pull the other one ahead, and so they both run a little faster as their amplitudes are reduced.

As noted before, work on coupled oscillators is as old as Huygens 1665 paper. Here's a more recent synthetic biological investigation from Garcia-Ojalvo, Elowitz, and Strogatz. What I haven't found yet (probably because I haven't looked hard yet) is a paper showing the spatial dynamics of such coupled oscillators as demonstrated here.

So what happens when the two regions are not started exactly 180 out of phase? Yet another interesting instability forms. Here's the same thing at 170 degrees:



This time the boundary between the two regions begins to wobble around as the two sides compete for control of the boundary space. This instability also creates interesting disconnection / reconnection events around 7 cycles. And what if we symmetry break the size of the two areas? Here's 180 degree separation with the center region being a bit smaller than the outer:



Now you see the unstable edge oscillation of the above case once the perturbations travel all the way around and interact with the center during the second reconnection event. Clearly such patterns are all reminiscent of diffraction scattering and other sorts of complicated spatial pattern-forming phenomena where waves bounce around inside closed spaces -- I find all such phenomena hard to intuit, and these examples are no different. Where things get fun, IMHO, is seeing how noise plus such simple oscillators generates interesting formations like the ones I posted a few days ago.

Several people have asked me what the relationship is between these simulations and cellular automata. I argue that these systems are analog, memoryless versions of CAs. While CAs are very logically simple, they aren't nearly as hardware-simple as the systems I'm working on here. For example, Wolfram's lovely illustration of all 256 binary 1D CA rules shows simple rules, but their implementation presupposes both memory and an a priori defined lattice that includes left/right differentiation. However, as Wolfram points out on page 424 of NKS, the symmetric 1D rules do generate interesting short-term random patterns when initialized with random state, so these are a good binary model for the analog systems presupposed here.
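For concreteness, here's a minimal stepper for the elementary 1D CAs. Rule 90 (new cell = left XOR right) is one of the left/right-symmetric rules; from random state it gives the short-term random patterns Wolfram describes, and from a single seed it draws the Sierpinski triangle:

```python
import numpy as np

# Elementary 1D CA stepper with cyclic boundaries.
def step_rule(cells, rule=90):
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    idx = 4 * left + 2 * cells + right        # 3-cell neighborhood as 0..7
    table = (rule >> np.arange(8)) & 1        # rule number -> lookup table
    return table[idx]

n = 65
row = np.zeros(n, dtype=int)
row[n // 2] = 1                               # single seed
for _ in range(4):
    row = step_rule(row)
```

After 4 steps the single seed leaves exactly two live cells at offsets of 4 from center, the row-4 slice of the Sierpinski triangle. Swapping in a random initial row instead of the single seed gives the random-state experiments.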

Meanwhile, my friend Erik Winfree's lab has very cleverly built DNA crystalline structures that do define a lattice and thus can implement Turing-complete rules at molecular scales. But on the scale of complexity, I'd argue that these amorphous analog systems are "simpler" in the sense that I can more easily imagine them evolving from interacting amplifiers that would have independent precursor functionality, without imposing a lattice. Erik might disagree, but anyway, it's this idea of evolving interacting amplifiers that I'm going to work on as I continue this.

Tuesday, December 9, 2008

Pattern formation sanity check



I ran a test where I changed the spatial resolution of the ring oscillator system (changing the number of spatial buckets while also changing the capacitance and conductance variables accordingly) to make sure that the pattern formation is not an artifact of the integration technique. These images show 32, 64, and 128 bucket integrations. It is clear that the spatial resolution matters in the sense that you can see a few small changes (features look temporally sharper, not just spatially), but I don't think the pattern formation is an artifact. As always, thanks to JHD for help in working out the right parameter transformation -- which he knows like the back of his hand because it's equivalent to a transmission line / heat equation.