And Speaking of Biologists and Mathematics

Well, I see that Nick has been a little more active this last month.  Thus, it is with great guilt that I try to step beyond my laziness and post something!

First, some announcements:

  • I am going to try keeping the Mathematical Biology Seminar going this Fall quarter, though the meetings will be every other week.  My plan is to focus on neural network modelling and the use of the NEURON software to implement some simple network models (a minimal warm-up sketch follows this list).
  • I would also like to keep the old CAMG meeting going, assuming there is any interest.  Drop a comment if you’d like to join us in exploring \LaTeX, as well as some other software.
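For anyone who wants to poke at NEURON ahead of time, here is a minimal single-cell warm-up using its Python interface – just a Hodgkin-Huxley soma driven by a current pulse, not a network yet. The geometry and stimulus numbers are arbitrary choices of mine, not anything we have settled on for the seminar.

```python
# Minimal NEURON warm-up: one Hodgkin-Huxley soma driven by a current pulse.
# (All parameter values are arbitrary placeholders for illustration.)
from neuron import h

h.load_file("stdrun.hoc")        # standard run system (h.continuerun, etc.)

soma = h.Section(name="soma")
soma.L = soma.diam = 20          # microns; a roughly isopotential blob
soma.insert("hh")                # built-in Hodgkin-Huxley channels

stim = h.IClamp(soma(0.5))       # current clamp at the middle of the section
stim.delay, stim.dur, stim.amp = 5, 1, 0.5   # ms, ms, nA

v = h.Vector().record(soma(0.5)._ref_v)      # membrane potential trace
t = h.Vector().record(h._ref_t)              # time stamps

h.finitialize(-65)               # start from a -65 mV resting potential
h.continuerun(40)                # simulate 40 ms

print("peak membrane potential: %.1f mV" % max(v))
```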

One of my summer projects is to look at the 1993 paper, “For Neural Networks, Function Determines Form” by Francesca Albertini and Eduardo Sontag.  An interesting aspect of this paper is that, for a certain kind of neural network (that is, one that can be modelled by a particular system of ordinary differential equations), the input/output behavior completely characterizes the weights of the connections between the neurons.
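For reference – and this is my paraphrase from memory, not a quotation – the networks in question are recurrent systems of roughly this shape, with the sigmoid $\sigma$ usually taken to be $\tanh$:

\[
\dot{x}(t) = \vec{\sigma}\bigl(A\,x(t) + B\,u(t)\bigr), \qquad y(t) = C\,x(t),
\]

where $x \in \mathbb{R}^n$ collects the neuron activations, $u$ is the input, $y$ is the measured output, $\vec{\sigma}$ applies $\sigma$ coordinate-wise, and the entries of $A$ and $B$ are the connection weights that the result says are pinned down by the input/output behavior.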

This follows from the form the ODEs take and from the requirement that the input functions be continuous.  We then get a kind of analogue of the existence/uniqueness theorem seen in any basic differential equations course.
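If I am reading the theorem correctly (my gloss, so treat it with suspicion): when $\sigma = \tanh$, two such networks $(A, B, C)$ and $(\tilde{A}, \tilde{B}, \tilde{C})$ that produce the same output for every continuous input can be carried onto one another by relabelling the neurons and flipping the signs of some of them.  In other words, the weights are determined up to the obvious symmetries of an odd sigmoid, which is about as strong a uniqueness statement as one could hope for.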

The first question that comes to my mind is: are these ‘neurons’, as modelled, biologically relevant?  That is, do ‘real’ neurons behave that way?  Certainly, real biological networks can be (and have been) modelled this way – though those networks tend to be networks using electrical synapses (gap junctions) rather than chemical synapses, so the ODE model is actually quite an accurate depiction of the electrical circuit.  My concern is with networks using chemical synapses and the propagation of an action potential down an axon.  This introduces a time component into the model that, I think, needs to be addressed.  Of course, it *might* be addressable by appropriately adjusting the network weights.
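One way to make that time-component worry concrete (my own tinkering, not anything in the paper) is to hang a transmission delay on each connection, which turns the ODE into a delay differential equation:

\[
\dot{x}_i(t) = \sigma\Bigl(\sum_j a_{ij}\, x_j(t - \tau_{ij}) + \sum_j b_{ij}\, u_j(t)\Bigr),
\]

where $\tau_{ij}$ lumps together axonal conduction and synaptic transmission time.  For gap-junction networks the $\tau_{ij}$ are essentially zero and we recover the original model; whether nonzero delays can instead be absorbed into adjusted weights, as speculated above, is exactly the question.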

The second question has to do with experimentation: What is the least amount of information the biologist needs to acquire in order to determine the network weights?  How much error is introduced by the experimental method?  Perhaps an example is in order: Imagine that you are studying the hearing pathway in a mouse.  Technically, the network you are looking at is the entire brain (since we are keeping the mouse alive), though perhaps we can limit the ‘network’ to the hearing pathways.  But our ‘input’ is not a nice function we plug into a neuron with a signal generator; it is a recording of a mouse squeak.  So we aren’t sure exactly what the inputs are, and our output is whatever we record from a single electrode placed in the mouse’s brain.

Certainly we can get some information this way, but what network information can we develop?  Even given a far simpler network with a well-defined input, what information about the network topology can we derive from the input/output behavior?
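To get a feel for the well-defined-input version of that question, here is a toy numerical sketch – entirely my own construction, nothing from the paper: simulate a two-neuron tanh network with known weights, treat the resulting trace as the ‘recording’, and see whether a least-squares fit can recover the weights from the input/output record alone.

```python
# Toy identification sketch: recover the weights of a small tanh network
# from a single input/output recording.  (All sizes and values are made-up
# placeholders for illustration.)
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
dt, T = 0.01, 8.0
n_steps = int(T / dt)
u = np.sin(2 * np.pi * 0.7 * dt * np.arange(n_steps))   # known 1-d input signal

def simulate(params, u):
    """Euler-integrate  x' = tanh(A x + b u),  y = c . x  for flattened params."""
    A = params[:4].reshape(2, 2)
    b = params[4:6]
    c = params[6:8]
    x = np.zeros(2)
    y = np.empty(len(u))
    for k, uk in enumerate(u):
        x = x + dt * np.tanh(A @ x + b * uk)
        y[k] = c @ x
    return y

true_params = np.array([-1.0, 0.8, -0.5, -1.2, 1.0, 0.3, 1.0, -0.7])
y_obs = simulate(true_params, u) + 0.01 * rng.standard_normal(n_steps)  # noisy 'electrode'

fit = least_squares(lambda p: simulate(p, u) - y_obs,
                    x0=0.1 * rng.standard_normal(8))
print("recovered:", np.round(fit.x, 2))
print("true:     ", true_params)
```

Even in this cooked-up setting the optimizer may settle on a permuted or sign-flipped copy of the true weights that reproduces the output just as well – which is exactly the ambiguity the uniqueness result allows – and with a single noisy output channel, as in the mouse experiment above, things only get harder.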

Also, can a ‘large’ network emulate a ‘smaller’ one?  What do ‘large’ and ‘small’ even mean in this context?  The number of neurons?  The number of connections (synapses)?  Or, since we are looking at directed networks (paths tend to be one-way), the number of recurrent (feedback) connections?  This is going to be my focus for a while, so expect more on this subject as I work my way through the paper.

Of course, there is more to talk about than just neural networks!  Also, I’ll spare you many of the details of my analysis here (though I’ll probably put them up *somewhere*) and focus on the results.

That’s it for now!

ex animo-

Felicis


One Response

  1. I’d recommend looking into the 70s cybernetics people, who worked on this stuff before it became unfashionable – particularly Ross Ashby, as he was really good at making it accessible.
    Perhaps large/small could be defined in terms of degrees of freedom, with highly constrained neural networks actually being pretty small. But there is also a simple human meaning to the term, which is the bother of actually putting a network together. When comparing these definitions, perhaps what you are after is functionally equivalent but more efficient structures: doing the same calculation with less.
    Ashby’s formulation of variety in input/output transformations was a generalisation of Shannon’s information theory, among other things, and seems to be relevant here, as is his idea of closed (determinate) transformations: you know when to stop adding variables to measure when you have a repeatable phenomenon. Or rather, when it is repeatable to a certain level of accuracy, which gives you your likelihood measure. (The size of acceptable statistical gremlins, etc.)
    Essentially you try to build as minimal a theory as possible that follows the assumption of determinism and includes the phenomenal properties you are interested in, often as components of a larger vector.
    Wiener produced a theory that it is statistically possible to recover the structure of a system from its response to “random noise”, but I think that may have rested on some assumptions about the changeability of the system under observation, such as ignoring that a sufficiently loud noise could damage its hearing.
    Anyway, there is lots of good stuff there you might be interested in, but it is pre-chaos theory, so you might need to bear its key insights in mind when reading.
