
Posts Tagged ‘trigger’

We’ll deal with that later…

Monday, April 30th, 2012

In my last post, I described the different LHC collision setup at LHCb this year. Today, I thought I would describe the different LHCb trigger setup.

Now what is the LHCb trigger, I hear you all ask? I actually wrote a post on the topic last year, which I invite you all to read for the details; here I’m just going to explain what I need to describe the changes.

The LHCb trigger is an online electronic system that selects which collision events will be written to disk for offline analysis. On the right here, I have a schematic of the system. It consists of two levels; the first is made up of custom electronics, called L0, while the second is a computer farm, called HLT.

We call it an online system, as it runs in real time. As fast as collision events are coming in, the L0 electronics decides whether to reject an event or send it to the HLT. The HLT gets a little more time to make a decision, but it still needs to be pretty fast. However, sometimes it can’t handle all the events that the L0 is feeding it, and we lose events as the buffers fill up.

 

This situation is what our new trigger setup is designed to avoid. How are we going to do this? It was noticed that the HLT computing farm sits idle when there aren’t any collisions. So somebody came up with the clever idea to buffer events locally on the farm nodes and defer processing them until after the current collision period[*]. Thus the trigger now looks something like the schematic on the left[**].

This means we can record even more data!
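To make the deferral idea concrete, here is a toy sketch in Python (the capacities, rates, and function names are all invented for illustration; this is not LHCb software): during a fill, whatever the HLT can’t process in real time is parked in a local disk buffer instead of being dropped, and the buffer is drained once the fill ends.

from collections import deque

HLT_CAPACITY_PER_TICK = 3   # hypothetical: events the farm can process per time step
DISK_BUFFER_LIMIT = 10      # hypothetical: events we can park on the farm's local disks

def run_fill(incoming_per_tick, ticks):
    """During 'stable beams': process what we can live, defer the overflow to disk."""
    disk_buffer = deque()
    processed, lost = 0, 0
    for _ in range(ticks):
        backlog = incoming_per_tick
        done = min(backlog, HLT_CAPACITY_PER_TICK)   # real-time HLT processing
        processed += done
        backlog -= done
        while backlog > 0 and len(disk_buffer) < DISK_BUFFER_LIMIT:
            disk_buffer.append("event")              # deferred instead of dropped
            backlog -= 1
        lost += backlog                              # lost only once the disk buffer is also full
    return processed, disk_buffer, lost

def drain_between_fills(disk_buffer):
    """Between fills the farm would otherwise sit idle, so it works through the backlog."""
    deferred = len(disk_buffer)
    disk_buffer.clear()
    return deferred

processed, buffer, lost = run_fill(incoming_per_tick=5, ticks=4)
print(processed, "processed live,", len(buffer), "deferred,", lost, "lost")
print(drain_between_fills(buffer), "deferred events processed after the fill ends")

In this cartoon, events are only lost once even the disk buffer is full, which is exactly the kind of trade-off the technicalities in [**] are about.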

——————————————————————————–

[*] The LHC doesn’t collide protons continuously, there’s a cycle in which protons are injected, accelerated, collided, ejected and the machine prepared for the next injection. Ideally, most of the time would be spent in collisions (in LHC speak: stable beams), but this isn’t always possible or viable.

[**] I have obviously simplified how the deferred HLT works. Like most simple ideas, it was quite complicated in practice. There were a lot of technicalities to consider, like how many events to store in the overflow, or what to do if the overflow became full, or how to avoid the scenario where we’re still processing deferred events when the next collision period starts…


Physicists did a lot of planning for data analysis before the LHC ever ran, and we’ve put together a huge number of analyses since it started. We’ve already looked for most of the things we’ll ever look for. Of course, many of the things we’ve been looking for haven’t shown up yet; in fact, in many cases including the Higgs, we didn’t expect them to show up yet! We’ll have to repeat the analysis on more data. But that’s got to be easier than it was to collect and analyze the data the first time, right? Well, not necessarily. We always hope it will be easier the second or third time around, but the truth is that updating an analysis is a lot more complicated than just putting more numbers into a spreadsheet.

For starters, every time we add new data, it was collected under different conditions. For example, going from 2011 to 2012, the LHC beam energy will be increasing. The number of collisions per crossing will be larger too, and that means the triggers we use to collect our data are changing too. All our calculations of what the pileup on top of each interesting collision looks like will change. Some of our detectors might work better as we fix glitches, or they might work worse as they are damaged in the course of running. All these details affect the calculations for the analysis and the optimal way to put the data together.

But even if we were running under completely stable conditions, there are other reasons an analysis has to be updated as you collect more data. When you have more events to look at, you might want to limit yourself to the events you understand best. (In other words, if an analysis was previously limited by statistical uncertainties, then as those shrink, you want to get rid of your largest systematic uncertainties.) To get all the power out of the new data you’ve got, you might have to study new classes of events, or improve your understanding of questions where it was previously just “good enough.”
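As a rough numerical illustration (generic statistics, not any particular CMS measurement): the relative statistical uncertainty on a counted sample shrinks like 1/√N, while a flat systematic uncertainty does not, so beyond some sample size the systematics dominate and simply adding data stops helping.

import math

def total_uncertainty(n_events, relative_syst):
    """Relative uncertainty: Poisson statistics plus a flat systematic, added in quadrature."""
    stat = 1.0 / math.sqrt(n_events)
    return math.sqrt(stat**2 + relative_syst**2)

for n in (100, 10_000, 1_000_000):
    print(n, round(total_uncertainty(n, relative_syst=0.05), 4))
# 100       -> ~0.112  (statistics dominate)
# 10000     -> ~0.051  (the 5% systematic now dominates)
# 1000000   -> ~0.050  (more data barely helps until the systematic is reduced)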

So analyzing LHC data is really an iterative process. Collecting more data is always presenting new challenges and new opportunities that require understanding things better than before. No analysis is ever the same twice.


Can the LHC Run Too Well?

Friday, February 3rd, 2012

For CMS data analysis, winter is a time of multitasking. On the one hand, we are rushing to finish our analyses for the winter conferences in February and March, or to finalize the papers on analyses we presented in December. On the other, we are working to prepare to take data in 2012. Although the final decisions about the LHC running conditions for 2012 haven’t been made yet, we have to be prepared both for an increase in beam energy and an increase in luminosity. For example, the energy might go to 8 TeV center-of-mass, up from last year’s 7. That will make all our events a little more exciting. But it’s the luminosity that determines how many events we get, and thus how much physics we can do in a year. For example, if the Higgs boson exists, the number of Higgs-like events we’ll see will go up, and so will the statistical power with which we can claim to have observed it. If the hints we saw at 125 GeV in December are right, our ability to be sure of its existence this year depends on collecting several times more events in 2012 than we got in 2011.

We’d get many more events over 2012 if the LHC simply kept running the way it already was at the end of the year. That’s because for most of the year, the luminosity kept increasing as the LHC folks added more proton bunches and focused them better. But we expect that the LHC will do better still, starting close to last year’s peak and then pushing to ever-higher luminosities. The worst case we are preparing for is perhaps twice as much luminosity as we had at the end of last year.

But wait, why did I say “worst-case”?

Well, actually, it will give us the most interesting events we can get and the best shot at officially finding the Higgs this year. But increased luminosity also gives more events in every bunch crossing, most of which are boring, and most of which get in the way. This makes it a real challenge to prepare for 2012 if you’re working on the trigger, because you have to sift quickly through events with more and more extra stuff (called “pileup”). As it happens, that’s exactly what I’m working on.

Let me explain a bit more of the challenge. One of the triggers I’m becoming responsible for is trying to find collisions containing a Higgs decaying to a bottom quark and anti-bottom quark and a W boson decaying to an electron and neutrino. If we just look for an electron — the easiest thing to trigger on — then we get too many events. The easy choice is to ask only for higher-energy electrons, but beyond a certain point we start missing the events we’re looking for! So instead, we ask for the other things in the event: the two jets from the Higgs, and the missing energy from the invisible neutrino. But now, with more and more extra collisions, we have random jets added in, and random fluctuations that contribute to the missing energy. We are more and more likely to get the extra jets and missing energy we ask for even though there isn’t much missing energy or a “Higgs-like” pair of jets in the core event! As a result, the event rate for the trigger we want can become too high.

How do we deal with this? Well, there are a few choices:

1. Increase the amount of momentum required for the electron (again!)
2. Increase the amount of missing energy required
3. Increase the minimum energy of the jets being required
4. Get smarter about how you count jets, by trying to be sure that they come from the main collision rather than one of the extras
5. Check specifically if the jets come from bottom quarks
6. Find some way to allocate more bandwidth to the trigger

There’s a cost for every option. Increasing energies means we lose some events we might have wanted to collect — which means that even though the LHC has produced more Higgs bosons, it’s counterbalanced by us seeing fewer of the ones that were there. Being “smarter” about the jets means more time spent by our trigger processing software on this trigger, when it has lots of other things to look at. Asking for bottom quarks not only takes more processing, it also means the trigger can’t be shared with as many other analyses. And allocating more bandwidth means we’d have to delay processing or cut elsewhere.
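To see the trade-off in options 1-3 numerically, here is a toy selection in Python (the thresholds, the toy event model, and all the numbers are invented; this is not the CMS trigger code): tightening the cuts reduces the background acceptance, and hence the rate, but also costs signal efficiency.

import random
random.seed(1)

def toy_event(is_signal):
    """Crude stand-in for an event: one electron, a few jets, some missing energy."""
    return {
        "electron_pt": random.gauss(40 if is_signal else 25, 10),
        "jet_pts":     sorted((random.gauss(60 if is_signal else 30, 15) for _ in range(3)), reverse=True),
        "met":         random.gauss(45 if is_signal else 20, 15),
    }

def passes_trigger(ev, ele_cut, jet_cut, met_cut):
    jets = [pt for pt in ev["jet_pts"] if pt > jet_cut]
    return ev["electron_pt"] > ele_cut and len(jets) >= 2 and ev["met"] > met_cut

signal     = [toy_event(True)  for _ in range(10_000)]
background = [toy_event(False) for _ in range(10_000)]

for cuts in [(20, 25, 20), (27, 35, 30)]:          # looser vs tighter thresholds
    eff  = sum(passes_trigger(e, *cuts) for e in signal) / len(signal)
    rate = sum(passes_trigger(e, *cuts) for e in background) / len(background)
    print(f"cuts={cuts}: signal efficiency {eff:.2f}, background acceptance {rate:.2f}")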

And for all the options, there’s simply more work. But we have to deal with the potential for extra collisions as well as we can. In the end, the LHC collecting much more data is really the best-case scenario.


Searching for gold…

Wednesday, June 29th, 2011

G’day all! Today I will be continuing the Australian theme and discussing panning for gold. Which, to be completely correct, isn’t really Australian, since it was probably practiced in any gold-rich area. However, in my defense, the gold rush of the late 19th century is an integral part of Australian history, and a visit to one of the old gold areas is an excursion most Australian school children take.

For those who are wondering what gold panning is, it’s a method of searching for gold in stream beds using a pan. It doesn’t tend to yield high quantities of the precious metal, but it doesn’t take much equipment and can be used to locate gold rich areas. It requires lots of patience to sit by a stream and slowly separate the dense precious metal from the less dense, less interesting rocks and sand.

What does this have to do with particle physics and LHCb I hear you all ask? Well, gold panning is a fairly good analogy for trying to identify collisions in which B mesons are produced, and from those collisions trying to find the particular B meson decays we are interested in.

To give you some numbers, the rate of collisions at the LHCb interaction point is 40 MHz, of which only about 10 MHz will contain particles which are within the acceptance of the LHCb detector. Events where all the decay products of a B meson can be detected by LHCb have a rate of about 15 kHz, while the rate of specific B meson decays that are interesting for physics analysis is around a few Hz. So we are only interested in approximately one out of ten million collisions that the detector sees per second.

The first level of event selection is performed by an online electronic system, called the trigger, that selects which events will be stored on disk for offline analysis. The trigger is a very important system, since it is not possible to record every event on disk due to limited bandwidth; we must make sure that events containing interesting B meson decays are kept.


Schematically shown above, the LHCb trigger system operates on two levels. The first, called L0, is comprised of custom electronics and uses information from the VELO, the calorimeter, and the muon systems. From the ten million proton collisions that LHCb sees per second, it selects around one million events per second for further processing, while discarding the remaining nine million. The first level trigger works incredibly fast, making its decision in just four millionths of a second.

After filtering by the first level trigger, an overwhelming number of events still remains. These are fed into a farm of over two thousand computers, which make up the HLT, the second level trigger, located deep underground at the LHCb site. These machines select interesting events to save for analysis, further trimming the one million events per second to a more manageable two thousand. This second level trigger uses the full detector information and has more time to make a decision than its first level counterpart.

If you’ve been paying very close attention to all the numbers you might have noticed that we’re writing events to disk at a rate of 2 kHz, while the interesting physics rate is a few Hz. Due to computing resources, it is not possible to analyse the full dataset when the signal to background ratio is so low, so there is a second level of event selection, called stripping[*]. The major difference between the trigger and stripping is that events which the trigger rejects are lost forever; stripping selections, on the other hand, can be rerun if necessary.

Stripping contains a set of preselection algorithms defined by each physics analysis in LHCb, which are run offline after data taking to produce a set of selected events for further individual analysis. The events that pass the defined stripping selection criteria will be fully reconstructed, recreating the full information associated with each event in preparation for detailed analysis.
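Putting rough numbers on that chain, and sketching what a rerunnable stripping preselection looks like conceptually (Python; the rates are the approximate ones quoted above, and the selection criteria are purely hypothetical, not a real LHCb stripping line):

# Approximate rates quoted above, in events per second
SEEN_BY_LHCB = 10_000_000    # collisions within the detector acceptance
AFTER_L0     = 1_000_000     # kept by the first-level (hardware) trigger
AFTER_HLT    = 2_000         # written to disk by the second-level trigger
INTERESTING  = 5             # order of "a few Hz" of useful B-meson decays

print("fraction kept by L0     :", AFTER_L0 / SEEN_BY_LHCB)       # 0.1
print("fraction reaching disk  :", AFTER_HLT / SEEN_BY_LHCB)      # 2e-4
print("fraction that is 'gold' :", INTERESTING / SEEN_BY_LHCB)    # 5e-7

def stripping_preselection(event, criteria):
    """Offline, rerunnable preselection: unlike the trigger, events it rejects stay on disk,
    so the selection can be redone later with different criteria."""
    return all(check(event.get(name)) for name, check in criteria.items())

# A purely hypothetical stripping line, for illustration only
my_line = {
    "b_candidate_mass": lambda m: m is not None and 5.0 < m < 5.6,  # GeV, loose window around the B mass
    "vertex_quality":   lambda q: q is not None and q < 5.0,
}
print(stripping_preselection({"b_candidate_mass": 5.28, "vertex_quality": 2.1}, my_line))  # True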

Returning to the gold panning analogy, we started out with a pan full of generic proton collisions. Triggering removed all the collisions which are obviously not gold, which don’t look like B meson decays at all. Stripping removed what we think isn’t gold, but we put the rejected collisions to the side, just in case they could be gold. With what’s remaining, with a bit more work, hopefully we can find what we are looking for… Gold!

[*] Yes, it is actually called stripping. I’m too new to the experiment to know the history of the term, though I have been privy to enough discussions on the topic that I don’t find it amusing anymore.


Imagine you’re in charge of a budget for a large organization of a few thousand people who are experts in their field. Imagine that if you don’t spend the money in the budget, you can’t keep what you’ve saved; it will be lost forever. Now imagine that there’s another group of a few thousand experts with exactly the same budget, right down to the last penny.

That’s the kind of scenario that we face at the LHC, except the budget is in time and not money. We count proton collisions and not dollars. The LHC is delivering world-record luminosities right now, and the different experiments are getting as much data as they can. For LHCb and ALICE there is pressure to perform, but between ATLAS and CMS the competition is cutthroat. They’re literally looking at the same protons and racing for the same discoveries. Any slight advantage one side can get in terms of data is crucial.

What does any of this have to do with my work at ATLAS? Well, I’m one of the trigger rates experts for pileup. When we take data we can’t record every proton collision; there are simply too many. Instead, we pick the interesting events out and save those. To find the interesting events we use the trigger, and we only record events when the trigger fires. Even when we exclude most of the uninteresting events we still have more data than we can handle! To get around this problem we have prescales, which is where we only keep a certain fraction of events. The trigger is composed of a range of trigger lines, which can be independent of one another, and each trigger line has its own prescale.
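Conceptually a prescale is very simple: a line with prescale N keeps one out of every N events that fire it. A minimal sketch (Python, with invented line names; this is not the ATLAS trigger software):

class PrescaledLine:
    """Keep 1 out of every `prescale` events that fire this trigger line."""
    def __init__(self, name, prescale):
        self.name, self.prescale, self.counter = name, prescale, 0

    def accept(self, fired):
        if not fired:
            return False
        self.counter += 1
        return self.counter % self.prescale == 0   # record only every Nth firing

# Hypothetical menu: rare signatures kept unprescaled, busy ones heavily prescaled
menu = [PrescaledLine("single_muon_high_pt", 1),
        PrescaledLine("low_threshold_jets", 1000)]

kept = {line.name: sum(line.accept(True) for _ in range(10_000)) for line in menu}
print(kept)   # {'single_muon_high_pt': 10000, 'low_threshold_jets': 10}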

A high pileup event at ATLAS. Can you count the vertices? (ATLAS Collaboration)

The term “pileup” refers to the number of proton collisions per bunch crossing (roughly how many interactions we can expect to see when we record an event). When I came to ATLAS from BaBar I had to get used to a whole new environment and terminology. The huge lists of trigger lines alone made my head spin, and so far pileup has been the strangest concept I’ve had to deal with. Why take a scenario that is already overwhelmingly complicated, with one of the most intricate machines in the world, and make it even harder to understand, for the sake of a few more events? Because we’re in competition with CMS, that’s why, and everything counts. The image on the right shows a typical event with multiple interactions. Even counting the number of vertices is difficult!

Balancing the different prescales is where things get interesting, because we have to decide how we’re going to prescale each trigger. We have to make sure that we take as much data as possible, but also that we don’t over-burden our data taking system. It’s a fine balancing act and it’s hard to predict. Our choice of trigger prescales is informed by what the physicists want from the dataset, and what range of types of events will maximize our output. The details of what kinds of events we want are a very hotly debated topic and one that is best left to a separate blog post! For now, we’ll assume that the physicists can come up with a set of prescales that match the demands of their desired dataset. What usually happens then is that the trigger menu experts ask what would happen if things were a little different, if we increased or decreased a certain prescale.

The effects of proton burning on luminosity. (LHC)

We need to pick the right times to change the prescales, and it turns out that as we keep taking data, the luminosity decreases because we lose protons when they interact. This is known as proton burning and you can see the small but noticeable effect of this in the image above. As we burn more protons we can change the prescales to keep the rate of data-taking high, and that’s where my work comes in. The rates for different trigger lines depend on pileup in different ways, so understanding how they act in different scenarios allows us to change the prescales in just the right way. We can make our trigger very versatile, picking up the slack by changing prescales on interesting trigger lines, and pushing our systems to the limit. My job is to investigate the best way to make these predictions, and to use the latest data to do it. The pileup scenarios change quite rapidly, so keeping up to date is a full time job! And every second spent working on this means more protons have been burned and more collisions have taken place.
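Here is a cartoon of that bookkeeping (Python; the rate, the luminosity decay time, and the output budget are all invented, and a real trigger line’s rate depends on pileup in messier ways than the simple scaling assumed here): as the luminosity decays during a fill, the raw rate falls, and the prescale can be loosened to keep the recorded rate near the budget.

import math

OUTPUT_BUDGET_HZ = 400.0     # hypothetical total rate we are allowed to record for this line
RATE_AT_PEAK_HZ  = 2000.0    # hypothetical raw rate of one busy trigger line at peak luminosity
LUMI_DECAY_HOURS = 10.0      # hypothetical exponential decay time from proton burning

def raw_rate(hours_into_fill):
    """Raw trigger-line rate, assumed here to simply scale with the decaying luminosity."""
    return RATE_AT_PEAK_HZ * math.exp(-hours_into_fill / LUMI_DECAY_HOURS)

def best_prescale(hours_into_fill):
    """Smallest integer prescale that keeps the recorded rate within the budget."""
    return max(1, math.ceil(raw_rate(hours_into_fill) / OUTPUT_BUDGET_HZ))

for t in (0, 4, 8, 16):
    ps = best_prescale(t)
    print(f"t={t:2d} h: raw {raw_rate(t):7.1f} Hz, prescale {ps}, recorded {raw_rate(t)/ps:6.1f} Hz")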

It’s not an easy task; it forces me to think about things I’ve never considered before, and keeps the competition at the forefront of my mind. I knew I’d be in a race for discovery when I joined ATLAS, but I never realized just how intense it would be. It’s exciting and a little nerve-wracking. I don’t want to think about how many protons pass by in the time it takes to write a blog post. Did we record enough of them? Probably. Can we do better? Almost certainly. There’s always more space in this budget, and always pressure to stretch it that little bit further.


….. is what we are hoping  to have next Tuesday 🙂  The LHC made it official, and so they will attempt to collide the two proton beams at 3.5 TeV each, on Tuesday March 30.

It’s 01:15am and I just got home after a quite long day of work (although shorter than I expected). Everything needs to be ready before we get collisions, so the efforts have to double. As part of the high level trigger team in CMS, my work this week consists of making sure that we are able to accept all the good collision events (data). After a few days of intensive testing from different groups and people, we hope to deploy the final version of the trigger “menu” tomorrow, or on Thursday at the latest. The high level trigger is a key component of being able to accept data. It is basically a collection of code that runs online, live, to discriminate what information is put onto tape and what is not.

It is very likely that we will have lower-energy collisions (900 GeV) during the weekend as a preamble to the historic 7 TeV smashings. We also need the trigger to catch beam-gas events from 3.5 TeV circulating stable beams (no collisions), maybe on Sunday.

The adrenaline is starting to flow here at CERN.  It is somehow difficult to sleep, thinking about all this, for people like me who are on-call.  Most of the improvements, fixes, upgrades, etc, that we made after the learning experience of last year’s collisions are now in place, and ready for prime time.   We will do just fine.  I am sure.

Edgar F. Carrera (Boston University)


Mountains of data

Friday, October 2nd, 2009

In a previous post, Regina gave an overview of triggers. Let me add to that and give some numbers.

When the LHC is operating at design parameters, we will have collisions every 25 ns, i.e., at a 40 MHz rate (40 million/second). Obviously, we can’t collect data at that rate, so we pick the interesting events, which occur infrequently. A trigger is designed to reject the uninteresting events and keep the interesting ones; your proverbial “needle in the haystack”, as you will see below. The ATLAS trigger system is designed to collect about 200 events per second, where the amount of data collected for each event is expected to be around 1 Mbyte (for comparison, this post corresponds to about 4-5 kilobytes).

Before I get to the numbers of events that we will collect, let me first explain a couple of concepts; cross-section of a particular process and luminosity. Cross-section is jargon; basically, it gives you an estimate of the probability of a certain kind of event happening. Luminosity is a measure of the “intensity” of the beam. The number of events that we collect of a given type is given by the product of Luminosity and Cross-section.

One common kind of interaction is when two protons “glance” off each other, without really breaking up; these are called “Elastic Collisions”. Then you have protons colliding and breaking up, and producing “garden-variety” stuff, e.g., pions, kaons, protons, charm quarks, bottom quarks, etc; these are labelled Inelastic Collisions. The sum of all these processes is the “total cross-section”, and is about 100 millibarns, i.e., 1/10th of a barn; the concept of a “barn” probably derives from the expression “something is as easy as hitting the side of a barn”! So, a cross-section of 100 millibarns implies a very, very large probability [1]; for 7 TeV collisions, this cross-section decreases by about 10-20%, i.e., not much.

In contrast, the cross-section for producing a Higgs boson (with mass = 150 GeV, i.e., 150 times the mass of a proton) is 30 picobarns (30×10⁻¹² barns), i.e., approximately 3 billion times less than the “total cross-section” (at 7 TeV, the Higgs cross-section decreases by a factor of four). The cross-section for producing top quarks is about 800 picobarns (at 7 TeV, this is down by a factor of eight). So, you can see the need for a good trigger!

The design parameters imply a luminosity of 10³⁴, i.e., looking head-on at the beam there are 10³⁴ protons/square cm/second. So, taking the product of cross-section and luminosity, we estimate that we will get approximately 10⁹ “junk events”/second and 0.3 Higgs events/second! Of course, there are other interesting events that we would like to collect, e.g., those containing top quarks will come at a rate of 8 Hz. We also record some of the “garden-variety” events, because they are very useful in understanding how the detector is working. So, this is what the trigger does, separate what we want from what we don’t want, and do it all in “real time”.
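As a back-of-the-envelope check of those numbers (a small Python sketch using the cross-sections and luminosity quoted above; the barn conversion is in the footnote):

# Event rate = cross-section x luminosity, using the numbers quoted above
BARN = 1e-24                  # cm^2 (see footnote [1])
luminosity = 1e34             # per square cm per second, at design parameters

sigma_total = 100e-3 * BARN   # ~100 millibarns, the "total cross-section"
sigma_higgs = 30e-12 * BARN   # ~30 picobarns for a 150 GeV Higgs
sigma_top   = 800e-12 * BARN  # ~800 picobarns for top quarks

print("all collisions:", sigma_total * luminosity, "per second")  # ~1e9 "junk events"
print("Higgs bosons  :", sigma_higgs * luminosity, "per second")  # ~0.3
print("top quarks    :", sigma_top * luminosity, "per second")    # ~8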

As mentioned above, we plan to write to disk approximately 200 events per second. If we run the accelerator continuously for a year, we will collect 6×10¹⁵ bytes of data, i.e., 6 petabytes; this will fill about 38,000 iPods (ones with 160 GB of storage)! Each event is then passed through the reconstruction software, which will add to its size. We have come up with ways to handle all this data; I can talk about that in a later post.

–Vivek Jain, Indiana University

p.s. For fun facts about ATLAS, check out the new ATLAS pop-up book that is coming out soon! If you are on Facebook, go here. You can also see a video of this book.

[1] In standard units, 1 barn = 10⁻²⁴ cm²


Howdy, LHC fans, and welcome to my US LHC blog area. It is a great pleasure for me to have this opportunity to communicate what we are up to at the LHC from a more personal perspective. I hope you enjoy it!!

This is my first post, and so I thought it was a good idea to talk about things related to my first HLT (High-level Trigger) expert shift. It was a few weeks ago when the HLT cell phone rang in my pocket for the first time with a phone call from the CMS control room. My plan of attending a French lesson in Geneva later in the evening suddenly changed to a more exciting one. There was a problem with the HLT system and, as the expert on-call, I needed to figure out the nature of the problem very quickly. Before telling you about the problem, let me try to explain a key feature of our trigger system: regional reconstruction.

Imagine a friend of your identical twin brothers hands you two pictures of his group of friends. One of them is in front of the Pantheon in Rome and the other one in front of the façade of the Panthéon in Paris. Now, you have never been in either city and you know almost nothing about these historical sites (they look almost the same to you), but he challenges you to identify the cities in one minute. Since you are clever and you know that only one of your twin brothers made it to each of these two cities, you rapidly identify them and correctly associate the cities. Your friend is very impressed!

Now, imagine the same friend hands you the same two pictures, but made into finely cut jigsaw puzzles, and he challenges you again with the same task. You have only one minute to identify the cities, but now the faces of your brothers may not be as identifiable as they were before. However, you remain clever and take the whole minute to try to reconstruct the face of who you think is one of them in one of the photos. You start with a “seed” (a puzzle piece containing your brother’s eye, for instance) and then arrange the pieces around the seed to quickly take a look at his face. The task was not easy because there were many pieces, but you were fast enough to complete one face in one picture and, therefore, identify the cities correctly once again. Your friend is shocked by how smart you are!!

The electronic signals of the millions of independent channels in a modern particle detector are like jigsaw puzzle pieces of a picture of a collision. As there will be billions of them happening every second at the LHC, we use a system called trigger in order to select only the interesting collision “photos”. In CMS, once first beam collisions arrive, we will not have enough time to look at the whole collision photo, hence,  similarly to the analogy above, we put together just a couple of “particle faces” (regional reconstruction) using “puzzle pieces” from one or a few sub-detectors in order to be able to say (or not): “Yes, I recognize this face (I have seen it before), I will be interested in this picture, let’s keep it; we can put all the pieces back together later to see who else was in it and what the background was like (full reconstruction)”.
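In code terms, the “seed and grow outwards” idea looks roughly like the toy sketch below (Python; the grid of energy deposits and the neighbour rule are made up for illustration and are not CMS reconstruction code): starting from the highest-energy cell, we only ever touch cells connected to the seed, and never look at the rest of the “puzzle”.

def regional_reconstruction(hits, seed, max_cells=50):
    """Grow a cluster outwards from a seed cell, looking only at neighbouring cells
    instead of the whole detector -- the 'one face of the puzzle' idea."""
    cluster, frontier = set(), [seed]
    while frontier and len(cluster) < max_cells:
        cell = frontier.pop()
        if cell in cluster or cell not in hits:
            continue
        cluster.add(cell)
        x, y = cell
        frontier.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return cluster

# Toy "event": energy deposits keyed by (x, y) cell; two separate blobs of activity
hits = {(0, 0): 12.0, (1, 0): 8.5, (0, 1): 6.2,   # blob around the seed
        (10, 10): 9.1, (11, 10): 4.0}             # unrelated activity we never touch
seed = max(hits, key=hits.get)                    # highest-energy cell acts as the seed
cluster = regional_reconstruction(hits, seed)
print(cluster)                                    # only the cells connected to the seed
print(sum(hits[c] for c in cluster), "units of energy in the regional cluster")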

The problem I had to handle, after receiving the phone call, had to do with the reconstruction of some of the “faces” in our pictures of cosmic-ray data taking a very long time and clogging our system. What makes it very exciting is that, in case of a problem with any subsystem in the detector, an on-call expert needs to react very fast, usually under a lot of pressure. This is because if we stop taking “photos” for any reason, we might miss the “Kodak” moment of a lifetime: a black hole, a Higgs boson, a supersymmetric particle, or any other exotic event. We use cosmic data to better understand our system and to prepare for beam collisions.

We will be ready at CMS for our ultrafast photography adventure!

Edgar Carrera, Boston University


Can We “Point” the LHC, Too?

Wednesday, January 28th, 2009

The Bad Astronomy blog is publicizing a chance to choose what the Hubble Space Telescope looks at. The basic idea is that there’s going to be an internet vote between six objects that Hubble has never looked at; Hubble will be pointed at the winner and pictures of it will be sent out by April. It seems like a fun way to get the public to learn more about, and feel more involved in, the Hubble project.

I’ll let you read more details at one of the links above, but I have another question to consider: can we do something similar with the LHC? That is, could we put up some kind of page where people could vote on what kind of physics we would study over the course of some particular week?  Maybe a choice between searching for Supersymmetry, or a high-mass Higgs boson, or a low-mass Higgs boson?  At first glance, the answer would seem to be “no.”  We obviously have no control over what kind of physics happens when the protons of the LHC collide — we just look at what comes out.  And it seems unlikely that any physicist would volunteer to put their work hours into a particular analysis because of a public vote, and anyway we’ll have people working on all the high-profile analyses and many low-profile ones besides.

But there actually is a sense in which ATLAS or CMS could do something similar. Remember that our detectors can only record a few hundred events every second, out of the almost forty million times the beams cross during that second. There are lots of collisions we have to throw out because we can’t store enough data, and it’s the trigger system that decides which few we keep. In practice, there are a number of different signals that we program the trigger system to be interested in: we take a certain number of random low-energy events to help us calibrate what we see in our other events, and we have separate “trigger paths” for hadronic jets, for muons, for electrons, and so on. We try to record all the events that might represent interesting new physics, but as the collision rate at the LHC increases, we’ll have to throw away even some of those. When the committee meets to decide how to balance the different possible triggers, what is at issue is precisely which kinds of events the detector will “point at,” i.e. recognize as important and save. People with different physics interests might make different choices about how to achieve that balance, and every study would always love more trigger bandwidth if it were available; that’s why we have committees to argue about it in the first place.

So why not reserve 5% of the ATLAS or CMS trigger bandwidth for a public vote on what physics to look for, to give a little extra oomph to one study or another?  Actually, I can think of several good and practical reasons why not — but it’s fun to think about!
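Purely as a thought experiment (Python; the path names, the fractions, and the total output rate are all invented, not a real ATLAS or CMS menu), reserving 5% for a public vote would just mean carving the output bandwidth into slices like this:

TOTAL_RATE_HZ = 300.0      # invented total number of events per second we can afford to keep

allocation = {             # fractions a (hypothetical) trigger committee agreed on
    "muons": 0.25, "electrons": 0.25, "jets": 0.20,
    "missing_energy": 0.15, "calibration": 0.10,
    "public_vote": 0.05,   # the slice this post is daydreaming about
}
assert abs(sum(allocation.values()) - 1.0) < 1e-9, "fractions must add up to the full budget"

for path, frac in allocation.items():
    print(f"{path:15s} {frac * TOTAL_RATE_HZ:6.1f} Hz")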


Whacking Moles at the LHC

Tuesday, May 20th, 2008

When I was in undergraduate school at UC-Irvine, I lived in a Newport Beach summer rental during the winter, so it was fairly cheap for the area. It was next to the beach, so I could fall asleep to the sound of the ocean. Nearby, there was an entertainment area, the Balboa Fun Zone, with an arcade (the area was in the INXS video “Devil Inside”). It was full of video games (late 1980s), which I am generally bad at. However, it did have Skee-Ball, where you roll a ball into a series of rings, the smallest at the center giving the most points. You collected tickets as you played, and could redeem them for a prize at the end. I loved the Skee-Ball, and would play for quite a while, redeeming my tickets for some useless trinket at the end.

At the same arcade, there was a game called Whac-a-Mole. This consisted of little mole heads that popped up and you hit them back down again (with a mallet that looks like a giant marshmallow on a stick). I tried this once or twice, but it was too close to video games for me. I am not great at the hand-eye coordination exercises.

Today we are doing studies with the trigger again. I am using this period of time to check and see if two fixes I made worked. They seem to have worked, but two more popped up! I was just reminded of this game. I take my (soft) mallet and whack the moles down, and then they just pop up again, somewhere else. I hope when the game is done, and the moles are gone, I get enough tickets to redeem them for a really nice prize.
