Step-by-Step Troubleshooting Guide¶
Occasionally you may find tests that fail in Solano CI whose failures you have trouble reproducing locally. Typically this is due to one of a small number of reasons:
1. Ordering: Solano CI runs your tests in an order different from the order you use locally. If a test leaves some part of the environment in an unexpected state, you may see order-dependent failures. A common example is database state that is not always cleaned up between tests.
2. Network topology: Solano CI may have a different path to network resources. This can affect the loading of external assets, for instance making the load process take longer and thus causing tests to time out.
In some cases, the DNS resolver configuration may differ from the one expected by your application. This can result in localhost resolving to an IPv6 address instead of an IPv4 address, or vice versa. The best resolution of this issue is to accept either type of address for local addresses. If that is not possible, another solution is to use an explicit IP address in your configuration.
3. Clean environment: Solano CI starts with a fresh environment every time; if you have left over state locally this can cause a different behavior. Common examples are pre-compiled assets and databases that are already partially migrated locally.
4. Database sort order: We often see tests that intend to test for set equality for the values returned by a database query but actually test for equality between ordered lists. One simple solution is to sort both lists first.
5. Code load order: For languages such as ruby, the order in which files are loaded matters. In particular, if you develop on Mac OS X, which guarantees that directory listings are alphabetized, you may find different behavior under Unix in Solano CI, since Unix filesystems do not guarantee order.
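For the code load order issue, one common remedy is to sort glob results before requiring them, so the load order no longer depends on the filesystem. The sketch below demonstrates the underlying point with temporary files (the directory and file names are hypothetical):

```ruby
require "tmpdir"

# Dir.glob returns entries in filesystem order, which is not guaranteed
# to be alphabetical on every platform. Sorting the result first makes
# any subsequent require loop deterministic everywhere.
Dir.mktmpdir do |dir|
  %w[b.rb a.rb c.rb].each { |name| File.write(File.join(dir, name), "") }
  ordered = Dir.glob(File.join(dir, "*.rb")).sort.map { |p| File.basename(p) }
  puts ordered.inspect  # => ["a.rb", "b.rb", "c.rb"]
end
```

In application code, the same idea looks like `Dir.glob("lib/**/*.rb").sort.each { |f| require f }` instead of relying on the unsorted glob.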
What follows are a few common techniques to narrow down the failure and identify the underlying problem.
Step 1. Isolate The Test¶
Run the test by itself in Solano CI by supplying a test pattern to solano run. For example, if your spec/models/foo_spec.rb is failing:
$ solano run spec/models/foo_spec.rb
If the test fails when it runs by itself, it’s most likely encountering a difference between Solano CI’s environment and your workstation’s.
The most common subtle issues are due to ordering of results from:
- Database queries
- Hash lookups
- Dir globbing in ruby (Mac OS X guarantees glob order; other OSes don’t)
For example, say you have a ruby application with a Post model with a comments association. You might write a test like this:
post.comments << comment1
post.comments << comment2
post.comments == [comment1, comment2]
Unfortunately, relational databases make no guarantees about result ordering, unless you specifically set an order.
A better way to write this test (in RSpec) is:
post.comments.should include(comment1, comment2)
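If you need an exact-contents check rather than the include matcher, the sort-both-lists approach mentioned earlier also works. A plain-Ruby sketch of why sorting makes the comparison order-independent (the values stand in for IDs returned by a query with no ORDER BY):

```ruby
# Set-style comparison: sorting both sides first means database result
# ordering cannot affect the outcome.
returned = [3, 1, 2]   # e.g. IDs from a query with no ORDER BY
expected = [1, 2, 3]
puts returned == expected            # => false (order differs)
puts returned.sort == expected.sort  # => true
```

In a spec, assuming ActiveRecord-style records with an `id`, this becomes a comparison of `post.comments.map(&:id).sort` against the sorted expected IDs.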
Step 2. Check for Ordering Issues¶
In many cases, tests depend on the order in which they run. For example, testA creates a User record that testB expects to exist. Another common case is that testB deletes a seed/fixture record, and normally runs last. Solano CI may run testB before testA, or in parallel with testA, so testA or testB may fail because they are not order-independent.
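The pattern can be sketched in plain Ruby (these are hypothetical tests, not Solano CI API): the second test passes only if the first has already run.

```ruby
# Hypothetical order-dependent tests: test_b assumes state that test_a
# creates, so it fails whenever it runs first.
users = []

test_a = -> { users << "alice" }                      # creates the record
test_b = -> { raise "user missing" if users.empty? }  # assumes it exists

test_a.call
test_b.call   # passes in this order; would raise if run first
puts "passed"
```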
To help identify these ordering issues, Solano CI displays the full sequence of tests that led to a particular result.
In most cases, it’s enough to see just a few tests around the failure. To do so, expand the test result detail pane for the failing test (click the arrow). In the detail pane, you’ll see tests run before and after the one in question:
Copy and paste from the text box, and run these tests locally on your workstation.
$ bundle exec cucumber features/testB.feature features/testA.feature
There are some cases where the local ordering isn’t enough to shake out the problem, and you need the complete sequence of tests run in a worker from the beginning of the session. That sequence can be quite large, so we offer it for download as a list in plain text:
Step 3. Check for Concurrency Issues¶
Solano CI runs your tests in parallel with each other. Most applications will not notice.
First, run a handful of tests that includes the failing test in serial, either by passing parameters to solano run or by specifying a serialize list in config/solano.yml. Here’s an example with the command line:
$ solano run --test-pattern='spec/models/*_spec.rb' --max-parallelism=1
Then, run the same handful of tests in parallel:
$ solano run --test-pattern='spec/models/*_spec.rb'
Running with and without --max-parallelism can help to tease out parallelism problems; it does not guarantee ordering of tests.
Step 4. Bring Up A Debug Console¶
You can start a debug console from the “Actions” menu on the report page. This will start an environment that you can log into remotely that replicates the build environment. We recommend using it to:
- Identify environment differences from your local workstation
- Replicate test runtime ordering problems
- Watch browser integration tests over VNC to identify timing and other integration test problems.
If the test passes single-threaded, but fails when it runs in parallel with other tests, it’s probably due to the tests or the app relying on shared state in the system. Head over to our Parallelism And Intermittent Failures debugging guide.
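A common shared-state culprit is a fixed filesystem path that every parallel worker touches. Namespacing the path by process ID (or a worker index) keeps the workers apart; a minimal sketch with a hypothetical report file:

```ruby
require "tmpdir"

# A fixed path such as "/tmp/report.txt" collides when tests run in
# parallel workers. Including the process ID in the path gives each
# worker its own file.
path = File.join(Dir.tmpdir, "report-#{Process.pid}.txt")
File.write(path, "results")
puts File.read(path)  # => "results"
File.delete(path)
```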
Other Known Issues¶
- PaperClip requires a path to ImageMagick binaries. See Paperclip for how to set this path to match Solano CI’s worker environment.
- The DatabaseCleaner :truncation strategy can delete your seed data if you’re not careful. Read Missing Seed Data for information on how to recover.
- Read how to fix Cucumber Undefined Step problems.
- The aruba gem for driving CLIs with cucumber requires special setup. Read Testing Command Line Programs With Aruba for detailed instructions.
- If you are using Ruby on Rails, you may need to precompile your assets before the build starts, using a post_setup hook. Locally your assets are likely already precompiled, but in a fresh checkout they may need to be precompiled up front rather than compiled on the fly, particularly if compilation is slow.
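A hook for this might look like the fragment below. The key names are an assumption and should be checked against Solano CI’s configuration reference; only the rake task itself is standard Rails:

```yaml
# config/solano.yml -- hook key names are an assumption; verify against
# Solano CI's configuration docs before relying on them.
hooks:
  post_setup: bundle exec rake assets:precompile
```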