
Troubleshooting

Unable to resolve Triplequote artifacts

All Triplequote artifacts are hosted on our public artifact repository. If you have followed the installation instructions and dependency resolution is still failing, there are a number of possible explanations:

Overridden resolvers

If your build overrides the default set of resolvers (i.e., grepping for resolvers := returns a hit), then either replace resolvers := with resolvers ++= if you didn't intend to override the resolvers, or add the Triplequote Maven resolver to the list of resolvers:

resolvers ++= Seq(
  "Triplequote Maven Releases" at "https://repo.triplequote.com/artifactory/libs-release/",
  // your other resolvers
)

Warning

There are two repositories involved: the Maven repository holding all Hydra artifacts (the one above), and the sbt plugin repository (holding only the sbt-hydra plugin). The sbt plugin repository is added to project/plugins.sbt, while the Maven repository is added to your own build.
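As an illustration, the split typically looks like the following sketch. The sbt plugin repository name, URL, and plugin version below are placeholders; copy the exact lines from the installation instructions.

```scala
// project/plugins.sbt — sbt plugin repository plus the sbt-hydra plugin
// (repository name, URL, and version are placeholders)
resolvers += Resolver.url(
  "Triplequote Plugins Releases",
  url("https://repo.triplequote.com/artifactory/sbt-plugins-release/")
)(Resolver.ivyStylePatterns)

addSbtPlugin("com.triplequote" % "sbt-hydra" % "<hydra-version>")
```

The Maven resolver shown earlier goes in your own build.sbt instead.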

Behind a Proxy?

If you are behind a proxy, make sure you have read the sbt instructions to work with a proxy.

Error activating license

License activation requires access to https://activation.triplequote.com and an Oracle JDK. If you see an error activating the license, make sure you are using the Oracle JDK and that you can connect to the activation server. If your organization does not allow outside connections, contact support@triplequote.com to discuss an on-premise solution.

Using Coursier

If you are using Coursier, start by checking whether dependency resolution succeeds after removing the Coursier sbt plugin. This will help you pinpoint the problem. If resolution does work without Coursier, then check which version of Coursier you are using.
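For example, temporarily comment out the Coursier plugin in project/plugins.sbt and reload the build (the version shown is only an example):

```scala
// project/plugins.sbt — disable Coursier while diagnosing resolution issues
// addSbtPlugin("io.get-coursier" % "sbt-coursier" % "1.0.3")
```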

coursier.ResolutionException

This error usually happens if using an older version of Coursier. Upgrading Coursier to version 1.0.0 or later should solve the problem.
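For instance, bump the plugin version in project/plugins.sbt:

```scala
// project/plugins.sbt — use Coursier 1.0.0 or later
addSbtPlugin("io.get-coursier" % "sbt-coursier" % "1.0.0")
```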

ClassNotFoundException while running the compiler

If the Hydra compiler fails with a ClassNotFoundException it may be that one of your projects has an exclusionRule that matches one of Hydra's dependencies. A prime example is logback. Hydra dependencies are resolved using a hidden Ivy configuration, but by default exclusion rules match all configurations. Make sure you restrict such exclusion rules to the configurations you need, typically Compile and Test. Note that the fix to apply depends on the version of sbt in use.

For sbt 1:

excludeDependencies ++= Seq(
  ExclusionRule("ch.qos.logback", "logback-classic", "jar", Vector(Compile, Test), CrossVersion.binary)
)

For sbt 0.13:

excludeDependencies ++= Seq(
  SbtExclusionRule("ch.qos.logback", "logback-classic", "jar", Vector(Compile.name, Test.name), CrossVersion.binary)
)

Installed Hydra in the Global sbt plugins?

Generally we recommend against installing Hydra in the global sbt plugins directory. The reasons are twofold:

  • you may not be able to activate your license (see below) with sbt 1.0
  • you might have projects with a Scala version that we don't support

When Hydra is installed in the global sbt plugins directory it will be used to compile the meta-project (your project's build) on sbt 1.0. For that to succeed you need an activated license, and without one you won't reach the sbt prompt, where you could type hydraActivateLicense. If you go down this route, make sure you first install Hydra in a single project, activate the license, and only then move Hydra to the global sbt plugins directory.

The best practice is to create a project/plugins.sbt as described in the installation notes for each sbt project. If only a part of your team has access to Hydra, create a project/hydra.sbt file and move the sbt-hydra plugin declaration and required resolver there. Then, add this file to the set of files ignored by your version control (e.g., if you are using git, simply add the entry to the project's .gitignore).
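A minimal project/hydra.sbt could look like the following sketch (repository name, URL, and plugin version are placeholders; copy the exact lines from the installation notes):

```scala
// project/hydra.sbt — kept out of version control,
// present only for team members with Hydra access
resolvers += Resolver.url(
  "Triplequote Plugins Releases",
  url("https://repo.triplequote.com/artifactory/sbt-plugins-release/")
)(Resolver.ivyStylePatterns)

addSbtPlugin("com.triplequote" % "sbt-hydra" % "<hydra-version>")
```

With git, the matching ignore entry is a single line: project/hydra.sbt.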

sbt fails to load

If after adding Hydra to your project sbt fails to load, it is possible that you are using an unsupported version of sbt. The exception you see should be similar to the following:

at com.triplequote.sbt.hydra.HydraPlugin$.projectSettings$lzycompute(HydraPlugin.scala:106)
at com.triplequote.sbt.hydra.HydraPlugin$.projectSettings(HydraPlugin.scala:92)
at sbt.Load$$anonfun$autoPluginSettings$1$1.apply(Load.scala:666)
at sbt.Load$$anonfun$autoPluginSettings$1$1.apply(Load.scala:666)

To resolve the issue, upgrade the sbt version used in your project to version 0.13.13 or later (read the sbt documentation for how to specify the sbt version).
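The sbt version is declared in project/build.properties, for example:

```properties
sbt.version=0.13.13
```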

Hydra fails with OutOfMemoryException

Hydra has slightly higher memory requirements than regular Scala, so make sure you keep an eye on the JVM behavior. See tuning memory for how to make the sbt memory configuration part of your project.
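One common way to make the memory settings part of the project (assuming the standard sbt launcher script; the values below are examples to adjust for your codebase) is an .sbtopts file at the project root, with one option per line:

```text
-J-Xmx4G
-J-XX:ReservedCodeCacheSize=256m
```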

Hydra fails with StackOverflowError

Hydra has slightly higher memory requirements than regular Scala, and stack space is no exception. Usually this goes away by simply increasing the stack memory size by passing -Xss64m (adjust the value according to your needs) to the running JVM. There are different ways to do this based on your build tool, see tuning memory for more details.
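With the standard sbt launcher script, for example, the flag can go in an .sbtopts file at the project root (adjust the value to your needs):

```text
-J-Xss64m
```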

Missing required compiler argument: -sourcepath

If the error Missing required compiler argument: -sourcepath is reported to you on compile, chances are that you need to add HydraPlugin.hydraConfigSettings to your project's settings. For instance, this is needed if you are using the IntegrationTest configuration, as you can see here. The same applies for any other sbt configuration different from Compile or Test.
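A sketch for a build using IntegrationTest is shown below. The exact wiring may differ in your build (in particular, whether hydraConfigSettings must be wrapped in inConfig); the linked example shows the precise incantation.

```scala
// build.sbt — apply Hydra's per-configuration settings to IntegrationTest
lazy val root = (project in file("."))
  .configs(IntegrationTest)
  .settings(Defaults.itSettings)
  .settings(inConfig(IntegrationTest)(HydraPlugin.hydraConfigSettings))
```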

Invalid restriction: adding a node to an idle system must be allowed

sbt shows this error if it can't schedule a task because of concurrentRestrictions, even though no other tasks are running at that moment. This can happen when increasing hydraWorkers beyond the default value without also increasing the limit for Hydra's tag in concurrentRestrictions. The workaround is to increase the limit for Hydra tasks to be at least as large as the number of workers you wish to use:

concurrentRestrictions in Global := Seq(
  // ... your custom restrictions
  Tags.limit(HydraTag, numWorkers)
)

Missing scala-compiler.jar

If the error Missing scala-compiler.jar is reported to you on compile, it is likely that you are using the := operator instead of ++= (or += for a single element) for declaring the libraryDependencies of the project that fails to compile. To resolve the issue, just replace all occurrences of libraryDependencies := with libraryDependencies ++= Seq(/*...*/).
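For example (the cats dependency is only an illustration):

```scala
// Overwrites every dependency declared so far, including those added
// by plugins such as Hydra — avoid:
// libraryDependencies := Seq("org.typelevel" %% "cats-core" % "1.0.1")

// Appends instead, preserving dependencies added by plugins:
libraryDependencies ++= Seq(
  "org.typelevel" %% "cats-core" % "1.0.1"
)
```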

Cannot find a value of type: [mypkg.MyType] (MacWire library)

If you are using the MacWire library for dependency injection, and Hydra fails to compile and reports an error similar to the following:

[error] frontend/identity/app/conf/IdentityConfigurationComponents.scala:10: Cannot find a value of type: [com.gu.identity.cookie.IdentityKeys]
[error]   lazy val frontendIdentityCookieDecoder = wire[FrontendIdentityCookieDecoder]
[error]                                                ^

The problem can be fixed by upgrading MacWire to version 2.2.4 or later.
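For example, with the scope typically used for MacWire's macros:

```scala
libraryDependencies += "com.softwaremill.macwire" %% "macros" % "2.2.4" % "provided"
```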

Using Enumeratum: knownDirectSubclasses of ... observed before subclass .. registered

Enumeratum, especially when combined with Circe, may trigger a known bug in the Scala compiler that leads to errors like the following:

[error] ..\src\main\Foo.scala:16: Cannot materialize pickler for non-case class: ... If this is a collection, the error can refer to the class inside.
[error]   implicit val p = PicklerGenerator.generatePickler[Foo]
[error]                                                                ^
[error] knownDirectSubclasses of BaseFoo observed before subclass Foo registered

This error stems from the way macros access some internal data structures of the Scala compiler. It can always be reproduced using vanilla Scala and incremental compilation.

For a detailed treatment of this problem please check the Enumeratum issue tracker discussion.

The only known workaround is to import the enumeration (leaf class) members before the picklers are generated. For example, add this import:

// example enumeration
sealed trait Resource extends EnumEntry
object Resource extends Enum[Resource] {
  val values = findValues

  case object SomeResource extends Resource
}

// add this import in files that use the enumeration
import entities.Resource.SomeResource._

Sangria compilation error: "Field list is empty"

[info] Compiling 4 Scala sources to /Users/mirco/tmp/sangria-bug/target/scala-2.12/classes ...
[error] /Users/mirco/tmp/sangria-bug/src/main/scala/bug/Derivation.scala:6:68: Field list is empty
[error]   val MutationType = deriveContextObjectType[MyCtx, Mutation, Unit](_.mutation)

This compilation error is due to a bug (sangria#172) in the sangria library. A fix will be available in sangria 1.3.4, which is planned for release in late January or early February 2018.

Misplaced package object warnings

A package object should be defined in a file called package.scala and follow the directory convention. If this convention is not respected, Hydra issues a warning to avoid misbehavior and surprising compiler errors. Why? Because package objects are known to be fragile (see the many open issues on the Scala issue tracker), and their fragility is somewhat exacerbated by Hydra as it compiles sources in parallel. However, it is worth noting that all problematic interactions of Hydra with package objects can be reproduced with vanilla Scala and the sbt incremental compiler. Therefore, by sticking to the recommended conventions you can avoid all these problems altogether.
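For example, a package object for com.example.util follows the convention when it lives at the matching path (all names here are illustrative):

```scala
// src/main/scala/com/example/util/package.scala
package com.example

// The file is named package.scala and sits in the com/example/util
// directory, matching the package it defines.
package object util {
  type StringMap = Map[String, String]
}
```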