7th Workshop on Biological Distributed Algorithms
July 29th, 2019 in Toronto, Canada
DoubleTree Hilton Hotel Toronto Downtown (108 Chestnut Street)
Meeting Room: Toronto Room
We are excited to announce the 7th workshop on Biological Distributed Algorithms (BDA). BDA focuses on the relationship between distributed computing and distributed biological systems and, in particular, on analyses and case studies that combine the two. Such research can lead to a better understanding of the behavior of biological systems while at the same time yielding novel algorithms for solving basic distributed computing problems.
Invited speakers
- Danny Forger, U Michigan [website]
- Joel Levine, U Toronto [website]
- William Ryu, U Toronto [website]
- Jared Saia, UNM [website]
Registration
All attendees must register through the registration website of PODC 2019 at https://www.podc.org/podc2019/registration/. If you only plan to attend the Workshop, select the 'Workshop/Tutorials 1 (July 29th)' option.
Early registration ends June 30th.
Schedule
08:25 - 08:30 - Organizers' Welcome
Ants Part I
08:30 - 09:00 - Jared Saia, UNM [Invited] [slides]
Title: Robustly Foraging Using the Golden Ratio
09:00 - 09:20 - Anna Dornhaus, Victor Paat and Kenneth Chapin. [slides]
Title: Collective contests: ant colonies invest in defense according to their opponent, and group size interacts with resources held.
Abstract:
The worst enemies of ant colonies are other ant colonies. Here we empirically study how ant colonies invest in offensive and defensive behaviors and, in particular, how these allocation decisions are made relative to group size, the amount of resources to be defended, and the behavior of the opponent. We demonstrate that these factors interact in interesting ways, and that ant colonies may enter into 'arms races' of allocating workers to fights. We argue that this calls for a better theoretical investigation of ant contest behaviors.
09:20 - 09:40 - Arjun Chandrasekhar, James Marshall, Saket Navlakha and Deborah M Gordon. [slides]
Title: Better tired than lost: turtle ant trail networks favor coherence over shortest paths.
Abstract:
Creating a backbone routing network is a basic challenge in many distributed computing applications, including wireless sensor networks, mobile cellular systems, and robotic swarms. The goal of this backbone is to ensure that any two devices on the network can communicate via some path along the backbone. Unlike those of terrestrial ant species, the foraging trails of arboreal turtle ants are constrained to lie along a discrete graph formed by overlapping branches and vines in the forest canopy. For turtle ants, the routing backbone is a pheromone trail used to connect multiple nests and food sources. Here we report that turtle ant trails are not efficient, in the sense that they do not exclusively prioritize minimizing trail length - an objective that is commonly associated with trail formation in other species and in ant colony algorithms. Instead, turtle ants seek to maximize the coherence (or repeatability) of their trails, which ensures that the colony stays together, at the cost of traveling farther. To our knowledge, this is the first instance in which coherence has been shown to constrain the foraging trails of ants, and it exemplifies how the physical substrate of the environment may affect foraging decisions and thus algorithm design.
09:40 - 10:00 - Nitika Sharma and Raghavendra Gadagkar. [slides] [pdf]
Title: Spatial organization of food distribution on the nests of the primitively eusocial paper wasp Ropalidia marginata.
Abstract:
In social insect colonies, food transferred through space and time via nestmates carries both nutrition and information. We followed, spatially and temporally, food brought into semi-natural colonies of the tropical paper wasp Ropalidia marginata to understand the mechanism and efficiency of its distribution among adults as well as to larvae. Wasps divided the tasks of bringing food to the nest and feeding larvae among themselves. Using the analogy of the travelling salesman problem and the Hamiltonian path problem, we found that individuals optimized their routes in order to feed the randomly distributed larvae. Within each feeding bout, different feeders fed larval cells at random, sometimes resulting in repeated feeding of the same cells by different wasps. This lack of spatial segregation of their feeding effort provided redundancy that should keep larvae from going hungry. Considering all bouts together, larvae closer to the centre of the colony were fed significantly more frequently than larvae at the periphery. The cause of this variation could be the nest's geometry and needs to be studied further. The consequence of differential rates of larval feeding can, however, shape the fate of these larvae; previous work has shown that well-fed larvae develop into adults that are more likely to become egg-layers, while poorly fed larvae develop into adults more likely to become non-egg-layers. Understanding the spatial organization of food transfer may be a key to understanding how insect societies achieve efficient social organization and division of labour.
10:00 - 10:30 - Coffee Break 1
Ants Part II + Neuroscience Part I
10:30 - 11:00 - William Ryu, U Toronto [Invited] [slides]
Title: Measuring and modeling how C. elegans explores its environment
11:00 - 11:10 - Hsin-Hao Su and Nicole Wein. [slides] [pdf]
Title: Lower Bounds for Dynamic Distributed Task Allocation.
Abstract:
We study the problem of distributed task allocation in multi-agent systems, e.g., the division of labor in an ant colony or a robot swarm. Suppose there is a collection of agents, a collection of tasks, and a demand vector, which specifies the number of agents required to perform each task. The goal of the agents is to collectively allocate themselves to the tasks so as to satisfy the demand vector. We study the dynamic version of the problem, where the demand vector changes over time. Here, the goal is to minimize the switching cost, which is the number of agents that change tasks in response to a change in the demand vector. The switching cost is an important metric since changing tasks may incur significant overhead. We study a mathematical formalization of the above problem introduced by Su, Su, Dornhaus, and Lynch. We prove the first non-trivial lower bounds for the switching cost.
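The objects in this formalization can be made concrete with a small sketch. The Python snippet below is an illustration only (the names and greedy rule are assumptions, not the paper's model or lower-bound construction): an allocation maps each agent to a task, the switching cost between two allocations is the number of agents whose task changes, and a greedy re-allocation keeps as many agents in place as possible. It assumes the new demands sum to exactly the number of agents.

    # Illustrative sketch (assumed notation, not the paper's construction).
    def reallocate(allocation, new_demand):
        """allocation[i] is agent i's current task; returns a new allocation
        satisfying new_demand while moving as few agents as possible."""
        kept = [0] * len(new_demand)
        new_allocation = list(allocation)
        surplus = []
        for agent, task in enumerate(allocation):
            if kept[task] < new_demand[task]:
                kept[task] += 1            # this agent can stay on its task
            else:
                surplus.append(agent)      # more agents than needed here: must move
        deficits = [t for t, d in enumerate(new_demand) for _ in range(d - kept[t])]
        for agent, task in zip(surplus, deficits):
            new_allocation[agent] = task
        return new_allocation

    old = [0, 0, 0, 1, 1]                  # demands were (3, 2)
    new = reallocate(old, [1, 4])          # demands change to (1, 4)
    print(sum(a != b for a, b in zip(old, new)))   # switching cost: 2 agents move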
11:10 - 11:30 - Anna Dornhaus, Nicole Leitner, Nancy Lynch, Frederik Mallmann-Trenn and Dominik Pajak. [slides] [paper]
Title: Remember the Past and Forget Thresholds.
Abstract:
One of the best known models for task allocation in ant colonies is the so-called threshold-based task allocation model. Here, each ant has a fixed, and possibly distinct, threshold.
Each task has a fixed demand which represents the number of ants required to perform the task.
The stimulus an ant receives for a task is defined as the demand of the task minus the number of ants currently working at the task. An ant joins a task if the stimulus of the task exceeds the ant's threshold.
A large body of results has studied this model for over four decades; however, most of the theoretical work focuses on the case of two tasks.
Interestingly, no work in this line of research shows that the number of ants working at a task eventually converges to the demand, nor does any work bound the distance to the demands over time.
In this work, we study precisely this convergence. Our results show that while the threshold-based model works well in the case of two tasks (for certain distributions of thresholds), it no longer works for more than two tasks. In fact, we show that there is no setting of thresholds that yields a satisfactory deficit (demand minus number of ants working on the task) for every task.
This is in stark contrast to other theoretical results in the same setting that rely on state machines, i.e., some form of small memory together with probabilistic decisions.
The resulting task allocation is near-optimal and an order of magnitude better than what is possible using joining thresholds.
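To make the joining rule concrete, here is a minimal Python sketch of one possible reading of the threshold model described above (an illustration only, not the paper's formal model; in particular, ants never quit tasks in this simplified version).

    # Simplified sketch of a threshold-based joining rule: in each round, every
    # idle ant inspects each task's stimulus (demand minus current workers) and
    # joins the first task whose stimulus exceeds its personal threshold.
    def threshold_allocation(thresholds, demands, rounds=50):
        assignment = [None] * len(thresholds)          # None = idle ant
        for _ in range(rounds):
            workers = [sum(1 for a in assignment if a == t) for t in range(len(demands))]
            for ant, threshold in enumerate(thresholds):
                if assignment[ant] is not None:
                    continue
                for task, demand in enumerate(demands):
                    if demand - workers[task] > threshold:   # stimulus exceeds threshold
                        assignment[ant] = task
                        workers[task] += 1
                        break
        return [sum(1 for a in assignment if a == t) for t in range(len(demands))]

    print(threshold_allocation([0, 0, 0, 0, 0], demands=[3, 2]))   # [3, 2]: demands met
    print(threshold_allocation([0, 1, 2, 0, 1], demands=[3, 2]))   # [3, 1]: deficit of 1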
11:30 - 12:00 - Discussion
12:00 - 13:00 - Lunch
Neuroscience Part II + Programmable Matter
13:00 - 13:20 - Yael Hitron and Merav Parter. [slides] [pdf]
Title: Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons.
Abstract:
We consider the task of measuring time with probabilistic threshold gates implemented by bio-inspired spiking neurons. In the model of \emph{spiking neural networks}, the network evolves in discrete rounds; in each round, neurons fire in pulses in response to a sufficiently high membrane potential. This potential is induced by spikes from neighboring neurons that fired in the previous round, each of which can have either an excitatory or an inhibitory effect.
Discovering the underlying mechanisms by which the brain perceives the duration of time is one of the largest open enigmas in computational neuroscience. To gain a better algorithmic understanding of these processes, we introduce the \emph{neural timer} problem. In this problem, one is given a time parameter $t$, an input neuron $x$, and an output neuron $y$. The goal is to design a minimum-sized neural network (measured by the number of auxiliary neurons) in which every spike from $x$ in a given round $i$ makes the output $y$ fire for the subsequent $t$ consecutive rounds.
We first consider a deterministic implementation of the neural timer and show that $\Theta(\log t)$ (deterministic) threshold gates are both sufficient and necessary. This raises the question of whether randomness can be leveraged to reduce the number of neurons. We answer this question in the affirmative by considering neural timers with spiking neurons where the neuron $y$ is required to fire for $t$ consecutive rounds with probability at least $1-\delta$, and should stop firing after at most $2t$ rounds with probability $1-\delta$, for some input parameter $\delta \in (0,1)$. Our key result is a construction of a neural timer with $O(\log\log 1/\delta)$ spiking neurons. Interestingly, this construction uses only \emph{one} spiking neuron, while the remaining neurons can be deterministic threshold gates. We complement this construction with a matching lower bound of $\Omega(\min\{\log\log 1/\delta, \log t\})$ neurons. This provides the first separation between deterministic and randomized constructions in the setting of spiking neural networks.
Finally, we demonstrate the usefulness of compressed counting networks for \emph{synchronizing} neural networks. In the spirit of distributed synchronizers [Awerbuch-Peleg, FOCS'90], we provide a general transformation (or simulation) that can take any synchronized network solution and simulate it in an asynchronous setting (where edges have adversarial response latency) while incurring a small overhead w.r.t. the number of neurons and computation time.
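As a rough illustration of why $O(\log t)$ bits of state suffice in the deterministic case, consider the following toy simulation (an assumed sketch, not the authors' construction): the timer is simply a binary countdown register of $\lceil \log_2(t+1) \rceil$ bits, each of which could in principle be realized by a threshold gate.

    # Toy sketch: a neural timer simulated by a binary countdown register with
    # O(log t) bits of state. A spike from x reloads the register to t; the
    # output y fires while the register is non-zero.
    def neural_timer(t, x_spikes, rounds):
        """x_spikes: set of rounds in which the input neuron x fires.
        Returns the rounds in which the output neuron y fires."""
        counter = 0                      # only ceil(log2(t + 1)) bits are ever needed
        y_rounds = []
        for r in range(rounds):
            if r in x_spikes:
                counter = t              # reload on every input spike
            if counter > 0:
                y_rounds.append(r)       # y fires while the countdown is running
                counter -= 1
        return y_rounds

    print(neural_timer(t=5, x_spikes={2, 4}, rounds=15))
    # [2, 3, 4, 5, 6, 7, 8]: the spike in round 4 extends the firing window.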
13:20 - 13:40 - Lili Su, Chia-Jung Chang and Nancy Lynch. [slides] [pdf]
Title: Spike-Based Winner-Take-All Computation: Fundamental Limits and Order-Optimal Circuits.
Abstract:
Winner-Take-All (WTA) refers to the neural operation that selects a (typically small) group of neurons from a large neuron pool. It is conjectured to underlie many of the brain's fundamental computational abilities. However, not much is known about the robustness of a spike-based WTA network to the inherent randomness of the input spike trains. In this work, we consider a spike-based $k$-WTA model wherein $n$ randomly generated input spike trains compete with each other based on their underlying statistics, and $k$ winners are supposed to be selected. We slot time evenly, with each time slot of length $1\,ms$, and model the $n$ input spike trains as $n$ independent Bernoulli processes.
The Bernoulli process is a good approximation of the popular Poisson process but is more biologically relevant, as it takes refractory periods into account. Due to the randomness in the input spike trains, no circuit can guarantee to select the correct winners in finite time. We focus on analytically characterizing the minimal amount of time needed so that a target minimax decision accuracy (success probability) can be reached.
We first derive an information-theoretic lower bound on the decision time. We then design a simple WTA circuit whose decision time, for any fixed $\delta \in (0,1)$, is order-optimal in terms of its scaling in the network size $n$, the number of true winners $k$, and the task complexity $T_{\mathcal{R}}$.
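For intuition, the following sketch (an assumed setup, not the paper's circuit) models the inputs as independent Bernoulli spike trains and applies a naive $k$-WTA rule that simply counts spikes for $T$ slots and picks the top $k$; the results above concern how large such a decision time must be for a given accuracy.

    # Assumed setup for illustration only: n independent Bernoulli spike trains
    # and a naive k-WTA rule that counts spikes over T slots and selects the top k.
    import random

    def bernoulli_spike_trains(rates, T, seed=0):
        rng = random.Random(seed)
        # rates[i] is neuron i's firing probability per 1 ms slot
        return [[1 if rng.random() < p else 0 for _ in range(T)] for p in rates]

    def k_wta_by_counting(trains, k):
        counts = [sum(train) for train in trains]
        # indices of the k largest spike counts (ties broken arbitrarily)
        return sorted(range(len(counts)), key=lambda i: counts[i], reverse=True)[:k]

    rates = [0.9, 0.85, 0.3, 0.2, 0.1]   # the two highest-rate inputs are the true winners
    trains = bernoulli_spike_trains(rates, T=200)
    print(sorted(k_wta_by_counting(trains, k=2)))   # with high probability: [0, 1]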
13:40 - 13:50 - Mien Brabeeba Wang and Nancy Lynch. [paper]
Title: Integrating Temporal Information to Spatial Information in a Neural Circuit.
Abstract:
In this paper, we consider a network of spiking neurons with a deterministic synchronous firing rule operating in discrete time. We propose three problems -- ``first consecutive spikes counting'', ``total spikes counting'', and ``$k$-spikes temporal to spatial encoding'' -- to model how brains extract temporal information into spatial information from different neural codings. For a maximum input length $T$, we design three networks that solve these problems in time $O(T)$ using $O(\log T)$ neurons, with matching lower bounds for all three.
13:50 - 14:00 - Jorge Lobo and Nava Rubin. [slides]
Title: Continual Boltzmann Sampling of Approximate Solutions to NP-hard Optimization Problems.
Abstract:
We start by briefly discussing in what ways our work is related to biology, and in what ways it is not. The relation to biology comes from the basic component we use to build our compound, solution-seeking network. This basic element, which we call a PCRN (Probabilistic Competitive Recurrent Network), was developed in the context of perceptual bi-stability, where it provides a unifying theoretical explanation for a wide range of experimental findings. At the same time, it also adheres to well-established observations about the brain in terms of its population-scale connectivity (architecture) and the single-neuron update rule (dynamics), so that, taken together, it provides a simple yet biologically plausible model which explains a rich set of behaviors.
However, in the context of the project we present here, we are using the PCRN as a building block for networks that solve a variety of optimization problems -- including ones which humans are not particularly good at solving compared to computer algorithms (e.g., scheduling) -- and in that sense we are departing from the biology. That said, the results we present below use the TSP (traveling salesman problem), which has the advantage that it is easy for the human eye to find good solutions even for moderately large problem sizes. We do this so that the reader can grasp the results easily, but note that, from the point of view of the formalism, the approach can be applied to other optimization problems by replacing the portions (`layers') of network connectivity which embed the `soft' constraints (distance) of a TSP with their equivalent in other problems (e.g., time preferences in the scheduling problem), and similarly for the `hard' constraints.
The way we use the system takes advantage of the fact that, at its core, a PCRN is `happy' to alternate between the competing populations of neurons that are being potentiated, rather than just settling on a set of `winner' populations. Moreover, thanks to the simplicity of the PCRN connectivity scheme, it can be shown that the network divides its time between the competing solutions following the Boltzmann distribution.
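As a small illustration of this last point (a deliberately simplified assumption, saying nothing about PCRN connectivity itself), a system that divides its time according to the Boltzmann distribution visits low-cost solutions most often while still exploring the alternatives:

    # Minimal illustration: Boltzmann sampling over a handful of candidate
    # solutions; lower-cost candidates are visited exponentially more often.
    import math, random

    def boltzmann_sample(costs, temperature, rng):
        weights = [math.exp(-c / temperature) for c in costs]
        return rng.choices(range(len(costs)), weights=weights, k=1)[0]

    costs = [10.0, 11.0, 15.0]            # e.g., lengths of three candidate TSP tours
    rng = random.Random(0)
    visits = [0, 0, 0]
    for _ in range(10_000):
        visits[boltzmann_sample(costs, temperature=2.0, rng=rng)] += 1
    print(visits)   # the shortest tour is visited most, roughly in exp(-cost/T) proportion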
14:00 - 14:30 - Joel Levine, U Toronto [Invited] [slides]
Title: A little help on the fly, please?
14:30 - 14:40 - Break & Catch up
14:40 - 15:00 - Yuval Emek, Shay Kutten, Ron Lavi and William K. Moses Jr. [slides] [pdf]
Title: Deterministic Leader Election in Programmable Matter.
Abstract:
The notion of programmable matter was introduced by Toffoli and Margolus and its main purpose was to provide a conceptual way to model matter that could change its physical properties in some programmable way. This model envisioned matter as a group of individual particles interacting with each other subject to some constraints. Individually, each particle was a single computational entity. But when taken together, they produced matter that could be programmed to act in specific desired ways. In the context of this framework, it becomes possible to address what computational problems can be solved by such groups of particles and how efficiently they can be solved.
A concrete way to formalize this notion is the amoebot model, first proposed by Derakhshandeh et al. Amoebots (or particles) represent finite-memory mobile agents that move in a manner inspired by amoebae and must stay connected at all times. Addressing a fundamental problem in the amoebot model of programmable matter, we present the first deterministic algorithm to elect a unique leader in a system of connected particles, assuming only that particles are initially contracted. Previous algorithms either used randomization, made various assumptions (shapes with no holes, or known shared chirality), or elected several co-leaders in some cases. The main idea of the new algorithm is to use the particles' ability to move, something previous leader election algorithms have not exploited.
Some of the building blocks we introduce in constructing the algorithm are of independent interest, especially the procedure we present for reaching common chirality among the particles. Given the leader election and chirality agreement building blocks, various tasks in programmable matter are known to become possible or more efficient.
This talk is based on work to appear in ICALP 2019.
15:00 - 15:20 - Yehuda Afek, Yuval Emek and Noa Kolikant. [slides]
Title: Selecting a Leader in a Network of Finite State Machines.
Abstract:
This paper studies a variant of the \emph{leader election} problem under the \emph{stone age} model (Emek and Wattenhofer, PODC 2013), which considers a network of $n$ randomized finite automata with very weak communication capabilities (a multi-frequency asynchronous generalization of the \emph{beeping} model's communication scheme). Since solving the classic leader election problem is impossible even in more powerful models, we consider a relaxed variant, referred to as \emph{$k$-leader selection}, in which a leader should be selected out of at most $k$ initial candidates. Our main contribution is an algorithm that solves $k$-leader selection for a bounded $k$ in the aforementioned stone age model. On (general topology) graphs of diameter $D$, this algorithm runs in $\tilde{O}(D)$ time and succeeds with high probability. The assumption that $k$ is bounded turns out to be unavoidable: we prove that if $k = \omega(1)$, then no algorithm in this model can solve $k$-leader selection with a (positive) constant probability of success.
15:20 - 15:30 - Kristian Hinnenthal, Christian Scheideler and Dorian Rudolph. [slides]
Title: Fast Shape Formation with Hybrid Programmable Matter.
Abstract:
We consider the problem of shape formation with hybrid programmable matter, where we are given a set of robots that act on hexagonal tiles and aim to transform the structure of tiles into a certain shape. As we envision programmable matter to consist of nano-scale elements, the robots have only very limited computational capabilities. The problem has already been studied in a setting with only a single robot [Gmyr et al., DNA 2018], but so far there has been no progress towards a multi-robot solution that provably improves the runtime of shape formation. In this work, we present our progress towards an algorithm with a guaranteed runtime of $O(nD/k)$, where $n$ is the number of tiles, $D$ is the initial diameter of the structure, and $k \le \sqrt{n}$ is the number of robots, which improves the runtime of the single-robot algorithm by a factor linear in $k$.
15:30 - 16:00 - Coffee Break 2
Grab Bag
16:00 - 16:20 - John Erickson, Abhinav Aggarwal and Melanie Moses. [slides] [pdf]
Title: On the Minimal Set of Inputs Required for Efficient Neuro-Evolved Foraging.
Abstract:
In this paper, we perform an ablation study of NEATFA, a neuro-evolved foraging algorithm that has recently been shown to forage efficiently under different resource distributions. Through selective disabling of input signals, we identify a minimal set of input features that contribute the most towards determining search trajectories that favor high resource collection rates. Our experiments reveal that, independent of how the resources are distributed in the arena, the signals that give the controller the ability to switch from searching for resources to transporting them back to the nest are the most critical. Additionally, we find that pheromones play a key role in boosting the performance of the controller by providing signals for informed locomotion in the search for unforaged resources.
16:20 - 16:30 - Alberto Ancona, Ayesha Bajwa, Nancy Lynch and Frederik Mallmann-Trenn. [slides] [paper]
Title: How to Color a French Flag: Biologically Inspired Algorithms for Scale-Invariant Patterning.
Abstract:
In the French flag problem, initially uncolored cells on a grid must differentiate to become blue, white or red. The goal is for the cells to color the grid as a French flag, i.e., a three-colored triband, in a distributed manner. To solve a generalized version of the problem in a distributed computational setting, we consider two models: a biologically-inspired version that relies on morphogens (diffusing proteins acting as chemical signals) and a more abstract version based on reliable message passing between cellular agents.
Much of developmental biology research has focused on concentration-based approaches using morphogens, since morphogen gradients are thought to be an underlying mechanism in tissue patterning. We show that both our model types easily achieve a French ribbon - a French flag in the 1D case. However, extending the ribbon to the 2D flag in the concentration model is somewhat difficult unless each agent has additional positional information. Assuming that cells are identical, it is impossible to achieve a French flag or even a close approximation. In contrast, using a message-based approach in the 2D case only requires assuming that agents can be represented as constant-size state machines.
We hope that our insights may lay some groundwork for understanding what kinds of message-passing abstractions or guarantees, if any, may be useful, in analogy to cells communicating over long and short distances to solve patterning problems. In addition, we hope that our models and findings may be of interest in the design of nano-robots.
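As a toy illustration of the concentration-based idea in 1D (an assumed, deliberately simplified sketch that is not scale-invariant and is not the paper's algorithm), each cell reads a decaying morphogen concentration and compares it against two thresholds:

    # Toy 1D "French ribbon" via a morphogen gradient: a source at the left end
    # emits a morphogen whose concentration decays with distance; each cell picks
    # blue, white, or red by comparing its local concentration against two
    # thresholds. The thresholds here encode the ribbon length, so this sketch
    # is not scale-invariant.
    def french_ribbon(n_cells, decay=0.97):
        blue_cutoff = decay ** (n_cells // 3)        # concentration at the 1/3 mark
        white_cutoff = decay ** (2 * n_cells // 3)   # concentration at the 2/3 mark
        colors = []
        for i in range(n_cells):
            c = decay ** i                           # morphogen concentration at cell i
            if c > blue_cutoff:
                colors.append("blue")
            elif c > white_cutoff:
                colors.append("white")
            else:
                colors.append("red")
        return colors

    ribbon = french_ribbon(30)
    print(ribbon.count("blue"), ribbon.count("white"), ribbon.count("red"))   # 10 10 10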
16:30 - 17:00 - Danny Forger, U Michigan [Invited] [slides]
Title: Large-scale neuronal modeling on GPUs
17:00 - 17:10 - Abhinav Aggarwal, William Vining, Diksha Gupta, Jared Saia and Melanie Moses. [slides] [pdf]
Title: A Most Irrational Foraging Algorithm.
Abstract:
We present a foraging algorithm, GoldenFA, in which search direction is chosen based on the Golden Ratio. We show both theoretically and empirically that GoldenFA is more efficient for a single searcher than a comparable algorithm where search direction is chosen uniformly at random. Moreover, we give a variant of our algorithm that parallelizes linearly with the number of searchers.
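A hedged sketch of the direction-selection idea (an illustration, not necessarily GoldenFA's exact rule or its analysis): rotating successive search headings by the golden angle spreads them evenly, so no small set of angular sectors is revisited too often.

    # Illustrative sketch: generate search headings by repeatedly rotating
    # through the golden angle, 2*pi / phi.
    import math

    PHI = (1 + math.sqrt(5)) / 2            # the golden ratio

    def golden_headings(num_trips):
        """Yield one search heading (radians in [0, 2*pi)) per foraging trip."""
        angle = 0.0
        for _ in range(num_trips):
            yield angle % (2 * math.pi)
            angle += 2 * math.pi / PHI      # rotate by the golden angle

    for heading in golden_headings(5):
        print(round(math.degrees(heading), 1))   # 0.0, 222.5, 85.0, 307.5, 170.0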
17:10 - 17:20 - Shlomi Dolev, Ram Prasadh Narayanan and Christian Scheideler. [slides]
Title: Self Synchronized Radio Transmission of Nanorobots.
Abstract:
Synchronization of nanorobots can be inspired by nature. One similarity that exists in every model of synchronization is that the process involves a finite number of devices initially exhibiting arbitrarily shifted oscillations (timing signals) and eventually approaching a zero-noise scenario by superimposing coherence on themselves and on their neighbors through a shared medium (an electromagnetic (EM) or acoustic field, molecular sensing and eventual change of state, etc.). We discuss two propositions concerning the synchronization of nanorobots (also called particles) inspired by the pulsating impulses exhibited in nature (fireflies and honeybee combs). Assuming arbitrary initial incoherence among all particles in the medium, it can be shown that coherence improves over time within each subset of particles due to the stronger, mutual influence of each particle on the others. Subsequently, within a finite time bound, a leader particle emerges in the sense that all particles in the system synchronize to the oscillation frequency of the leader, thereby creating a strong, coherent pulsating (radio) signal. As the particles are assumed to be identical in geometry and physical properties, the convergence time depends on their distribution, energy, and the way they interact.
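For intuition about this kind of convergence (a generic illustration under assumed Kuramoto-style dynamics, not the propositions in the talk), identical oscillators coupled through a shared mean field and started with arbitrary phase offsets end up pulsing coherently:

    # Generic illustration: identical coupled oscillators converge to a coherent
    # oscillation; the order parameter r approaches 1 as they synchronize.
    import cmath, math, random

    def synchronize(n=20, steps=2000, dt=0.05, coupling=1.0, seed=1):
        rng = random.Random(seed)
        theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]   # arbitrary initial phases
        omega = 1.0                                               # common natural frequency
        for _ in range(steps):
            mean_field = sum(cmath.exp(1j * t) for t in theta) / n
            r, psi = abs(mean_field), cmath.phase(mean_field)
            # each oscillator is pulled toward the current mean phase of the group
            theta = [t + dt * (omega + coupling * r * math.sin(psi - t)) for t in theta]
        return abs(sum(cmath.exp(1j * t) for t in theta) / n)     # final order parameter

    print(round(synchronize(), 3))   # close to 1.0 once the population is coherent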
Call for presentations
We solicit submissions of extended abstracts describing recent results relevant to biological distributed computing. We especially welcome extended abstracts describing new insights and/or case studies regarding the relationship between distributed computing and biological systems, even if these are not yet fully formed. Since a major goal of the workshop is to explore new directions and approaches, we especially encourage the submission of ongoing work. Selected contributors will be asked to present, discuss, and defend their work at the workshop. Submissions should be in PDF format and include a title, author information, and a 4-page extended abstract. Shorter submissions are also welcome.
Please use the following EasyChair submission link:
https://easychair.org/conferences/?conf=bda2019
Note: The workshop will not include published proceedings. In particular,
we welcome submissions of extended abstracts describing work that has appeared or is
expected to appear in other venues.
Important Dates:
May 6, 2019 - Extended abstract submission deadline, 23:59 AoE time
May 29, 2019 - Decision notifications
July 29, 2019 - Workshop
Program Committee
Ziv Bar-Joseph - CMU
Anna Dornhaus - University of Arizona
Ila Fiete - MIT
Amos Korman - CNRS and University of Paris Diderot
Nancy Lynch - MIT
Melanie Moses - UNM
Calvin Newport - Georgetown
Merav Parter - Weizmann Institute
Ted Pavlic - ASU
Andrea Richa - ASU
PC Chairs
Yuval Emek - Technion
Saket Navlakha - Salk Institute