Week 7: Events, Exhaustion and Evaluations

Hi all,

This week has probably been the toughest one yet, not just mentally but also in hours spent chasing bugs I didn't understand. I've logged over 50 hours on the project, and ironically there's not much "visible" progress to show. But deep down I know this is part of the process, and what matters is that I've gotten to the root of a tricky problem; more on that below.


Updates From This Week

  • Made the PR changes suggested by Dr. Dräger and Taichi.

  • Ran the SBMLTestSuite using the wrapper program I wrote last week. I had already shared the test results with the team on Slack.

  • Results? Mixed bag:

    • Passed: ~85% of the test cases (somewhat happy!)

    • Wrong Output: ~12–13%

    • Error: ~2% (no idea why yet)

    • Results Map from SBMLTestRunner



So naturally, I spent the entire week narrowing down the failed and wrong-output cases. And guess what? I've finally figured out what's going wrong in most of them.

The Issue: Event Timing & Fixed Step Size

Most of the failing test cases carry the EventNoDelay or EventWithDelay component tags, meaning they involve SBML events. After digging in, here's what's happening:

Let's take a scenario:
Suppose a reaction converts S1 → S2, and there’s an event defined:
When [S1] < 0.2 → set [S1] = 1

Now, LSODA (the solver) is designed as a "black-box" integrator: it hides all of the step-size adaptation, error estimation, and Jacobian updates inside its own driver routine. It DOESN'T expose separate step() or errorEstimate() calls the way RosenbrockSolver does. Instead:

  1. You call LSODA once with your desired output time tout = t + h, and

  2. It returns a flag (≥ 0 on success, < 0 on failure) and internally adjusts its step size (and even method order) to meet your tolerances.
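To make that one-call pattern concrete, here's a toy Python sketch (hypothetical names and internals, not the actual ODEPACK routine or SBSCL's binding): the caller makes a single call per output time and gets a status flag back, while all sub-stepping stays hidden inside the driver.

```python
import math

# Toy illustration of a driver-style interface (hypothetical code, NOT the
# real LSODA): one call per requested output time; the internal sub-stepping
# is invisible to the caller, just like LSODA's driver routine.
def drive(f, y, t, tout, h_internal=1e-3):
    """Advance y from t to tout using hidden internal Euler sub-steps.
    Returns (flag, y): flag >= 0 on success, mirroring LSODA's status flag."""
    while t < tout:
        h = min(h_internal, tout - t)  # internal step the caller never sees
        y = y + h * f(t, y)
        t += h
    return 0, y

# Caller's view, matching the two steps above: request tout = t + h, check flag.
flag, y = drive(lambda t, y: -y, 1.0, 0.0, 0.1)
print(flag, round(y, 4))  # exact answer would be exp(-0.1) ≈ 0.9048
```

The point of the contrast: with a step-wise solver the caller could inspect every internal step, but with a driver like this (or LSODA) it only ever sees the state at the output times it asked for.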

Now, SBSCL drives the solver with a fixed time step (say 0.1), specified via the setStepSize() method of the DESolver interface, and only computes results at those discrete points. If the condition [S1] < 0.2 actually triggers between two steps (say at t = 2.234), the solver misses it, because it only evaluates at t = 2.2 and t = 2.3. The event may only be detected at t = 2.3, but by then we've already progressed using wrong intermediate values, causing a ripple effect that skews all further computations.
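Here's a minimal Python sketch of that miss (a toy decay model, not SBSCL code): with S1(t) = exp(-t), the trigger [S1] < 0.2 truly flips at t = ln 5 ≈ 1.609, but checking only on a fixed grid of 0.1 doesn't notice until t = 1.7.

```python
import math

# Toy model (illustration only): species S1 decays as dS1/dt = -S1,
# so S1(t) = exp(-t). An SBML-style event fires when S1 < 0.2;
# the true crossing time is t = ln(5) ≈ 1.6094.
def s1(t):
    return math.exp(-t)

true_crossing = math.log(5.0)

# Fixed-step detection: check the trigger only at multiples of h = 0.1,
# the way a fixed-step driver loop would.
h = 0.1
t = 0.0
while s1(t) >= 0.2:
    t += h
detected = t

print(f"true crossing:    {true_crossing:.4f}")   # 1.6094
print(f"detected at grid: {detected:.4f}")        # 1.7000
```

Everything computed after t = 1.609 but before detection still uses the pre-event trajectory, which is exactly the ripple effect described above.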

Here's a real example from SBML Test Case 00026:
The curve generated by LSODA is almost identical to the expected result. But right at the event trigger point, there’s a slight deviation — this minor shift cascades into wrong species values from that point forward.


I also verified the same test with DormandPrince853Solver (another solver in SBSCL), and… same issue. So the bug isn't in the LSODA implementation itself; it's a fundamental challenge of event detection in fixed-step numerical solvers.

For those curious:

“An event in SBML is a set of assignments executed when a trigger switches from false → true. It may be immediate or delayed, and handling its exact timing is crucial for correct numerical simulation results.”

This whole experience highlights how essential step-size adaptation is when dealing with models containing events; otherwise, we might completely miss triggers that occur between steps.
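One standard remedy, sketched here in Python as an assumption rather than anything SBSCL currently does, is to localize the trigger by bisection once it flips between two grid points, and only then apply the event assignment at the localized time.

```python
import math

# Hypothetical event-localization helper (not SBSCL code): once the trigger
# is False at t_lo and True at t_hi, bisect to pin down the crossing time.
def locate_event(f, trigger, t_lo, t_hi, tol=1e-9):
    """f(t) -> state; trigger(state) -> bool.
    Returns a time within tol of the point where the trigger flips."""
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if trigger(f(mid)):
            t_hi = mid   # trigger already fired: crossing is earlier
        else:
            t_lo = mid   # trigger not yet fired: crossing is later
    return t_hi

# Toy decay model S1(t) = exp(-t) with trigger S1 < 0.2:
# the flip happens between the grid points 1.6 and 1.7.
t_event = locate_event(lambda t: math.exp(-t),
                       lambda s: s < 0.2,
                       1.6, 1.7)
print(f"localized event time: {t_event:.6f}")  # ≈ ln(5) ≈ 1.609438
```

The integrator would then restart from the localized time with the assigned values, which is essentially what adaptive, root-finding event handlers do.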


Meeting Minutes and What Next?

We discussed this in our weekly meeting. Here's what came out of it:

  • Taichi suggested I open a GitHub issue detailing the problem and the steps to reproduce it, so that he and Arthur can take a deeper look.

  • I'll also post about this in Slack to keep Dr. Dräger and Dr. Funahashi in the loop — they couldn’t attend this week's meeting due to personal reasons.

  • Dr. Dräger later sent me some additional resources on this topic. They look really promising and support my reasoning about the failing test cases. I'll talk about them in the next blog post after I dive in.


Personal Notes

Honestly, this week felt like running on a treadmill: a lot of effort, but staying in the same place. I was completely drained, and at one point I had no idea what was happening.

But just then, I got a message from Dr. Dräger saying I’ve passed the mentor midterm evaluations! (Yayy! ✌)
That honestly made my day. I really needed a little win to keep going. I've also submitted my contributor evaluation (I obviously wrote great things; everyone has been an incredible mentor!). Should I share the feedback here? I don't know. Let me know in the comments.


Coming Up

  • Create a GitHub issue on the problem

  • Start diving into the resources sent by Dr. Dräger

  • Try potential workarounds or strategies for handling mid-step events

  • And of course, keep pushing forward

Till next week — where I hope I’ll be writing with more resolved bugs and less mental fog. 😄

Thank you for following. Bye!👋
