vulcan's Issues

Remove default "None" value for namespace of named types

I think the default value of None should be removed, for a couple of reasons:

  • It's confusing and potentially misleading. The Avro spec for names makes an important but subtle distinction between specifying the null namespace (using the empty string) and not specifying a namespace: in the latter, and only the latter, case, the namespace from the closest enclosing scope is used. On the other hand, a Java Schema object will always specify a (possibly null) namespace. It's counterintuitive that, when the Avro API is invoked without specifying a namespace, the result is a type that specifies the null namespace.
  • Use of the null namespace should be discouraged. Its semantics are confusing, and its implementation has bugs such as this open issue from 2015 that took me two full days to identify as the cause of a decoding failure.

Better still (IMO) would be to make the namespace field non-optional, but allow use of the empty string, as per the Avro spec, to specify the null namespace. This retains the ability to specify the null namespace for those who know what they're doing, while no longer suggesting that it is a sensible default or that it is possible to leave the namespace unspecified.
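
To make the distinction concrete, here is a small illustration using the plain Java Avro parser (the comments describe what the spec says; as noted above, the Java implementation has known bugs in exactly this area):

import org.apache.avro.Schema

// The inner record specifies no namespace, so per the spec it inherits
// "outer.ns" from the enclosing record.
val inherited = new Schema.Parser().parse(
  """{"type":"record","name":"Outer","namespace":"outer.ns","fields":[
    |  {"name":"inner","type":{"type":"record","name":"Inner","fields":[]}}
    |]}""".stripMargin
)

// The inner record specifies the empty string, which per the spec selects
// the null namespace rather than inheriting "outer.ns".
val nullNamespace = new Schema.Parser().parse(
  """{"type":"record","name":"Outer","namespace":"outer.ns","fields":[
    |  {"name":"inner","type":{"type":"record","name":"Inner","namespace":"","fields":[]}}
    |]}""".stripMargin
)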

I'm happy to open a PR for this - thoughts?

Move `deriveEnum` and `deriveFixed` to generic module

These methods aren't straightforward to port to scala 3, and they feel more appropriate for the generic module anyway. As discussed in #239 we should first duplicate/alias them in the generic module and deprecate the core versions, then remove them from core in the next major release.

Add benchmarks

It would be good to keep track of the performance impact of changes like those in #437. Perhaps we can adapt (with attribution) the benchmarks from Avro or Avro4s to get started.

Derive unwrapped Codec for newtypes

It would be a small but nice feature to have Vulcan automatically derive "unwrapped" codecs for value classes, similar to what circe provides via the generic-extras module, e.g.

final case class EventId(value: String)

Instead of

Codec[String].imap(EventId.apply)(_.value)

we could just write the following to get the codec automatically derived:

Codec.deriveUnwrapped[EventId]

It's slightly shorter and easier to write if you use the generic module.
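
For what it's worth, here is a hedged sketch of one way such a derivation could be implemented, using shapeless purely for illustration (this is not necessarily how the generic module would do it):

import shapeless.{::, Generic, HNil}
import vulcan.Codec

// Derive a codec for any single-field case class by delegating to the codec
// of its only field.
def deriveUnwrapped[A, Repr](implicit
    gen: Generic.Aux[A, Repr :: HNil],
    codec: Codec[Repr]
): Codec[A] =
  codec.imap[A](repr => gen.from(repr :: HNil))(a => gen.to(a).head)

final case class EventId(value: String)

val eventIdCodec: Codec[EventId] = deriveUnwrapped[EventId, String]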

What do you think?

LocalDate codec produces nonsense results where epochDay > Integer.MAX_VALUE

My ScalaCheck tests threw up an issue with Vulcan's supplied LocalDate codec. Internally the codec gets the epoch day via value.toEpochDay().toInt. However, the epoch day can be greater than the maximum possible Int (toEpochDay returns a Long), and in that case a surprising value is encoded rather than an error being raised.
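
A quick illustration of the overflow, using plain java.time with no Vulcan involved:

import java.time.LocalDate

val farFuture = LocalDate.of(6000000, 1, 1)

// toEpochDay returns a Long well above Int.MaxValue for dates this far out.
val epochDay: Long = farFuture.toEpochDay

// The .toInt conversion silently wraps around instead of failing, which is
// what produces the surprising encoded value.
val truncated: Int = farFuture.toEpochDay.toInt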

Support for AvroName

First pass at #423: case class derivation and deriveFixed, with some new tests passing.

Before I go any further, is this a feature that would be useful? Is this draft PR going in the right direction? As I am not super-familiar with the Vulcan library and its various use cases, what else is left to implement?

Possible bug when producing Avro Unions to a Kafka topic

Hey there,

I found an issue that pertains to producing Avro's union types to Kafka + Confluent Schema Registry using FS2 Kafka with Vulcan. It seems like the coproduct/union information is lost when producing data to Kafka and it only uses the specific subtype/record when registering the schema with Confluent Schema Registry rather than registering the entire union.

I have a union like so:

import vulcan.Codec
import vulcan.generic.*

sealed trait AvroUnionExample {
  def id: Int
}

object AvroUnionExample {
  final case class First(id: Int, name: String, occupation: String)   extends AvroUnionExample
  final case class Second(id: Int, deviceId: String, reading: Double) extends AvroUnionExample
  final case class Third(id: Int, meterId: String, reading: Double)   extends AvroUnionExample

  implicit val avroUnionExampleCodec: Codec[AvroUnionExample] = Codec.derive[AvroUnionExample]
}

This correctly prints out all the union information when pretty-printing the schema. When I try to use it:

import cats.effect.*
import fs2.kafka.*
import fs2.kafka.vulcan.{avroSerializer, AvroSettings, SchemaRegistryClientSettings}
import fs2.*

object KafkaProducerApp extends IOApp {
  def producerSettings[K, V](implicit
      recSerForK: RecordSerializer[IO, K],
      recSerForV: RecordSerializer[IO, V]
  ): ProducerSettings[IO, K, V] =
    ProducerSettings[IO, K, V]
      .withBootstrapServers("localhost:9092")

  override def run(args: List[String]): IO[ExitCode] = {
    val avroSettings: AvroSettings[IO] = AvroSettings(SchemaRegistryClientSettings[IO]("http://localhost:8081"))
    implicit val serializer: RecordSerializer[IO, AvroUnionExample] = avroSerializer[AvroUnionExample].using(avroSettings)

    val eventsToProduce: List[AvroUnionExample] = List(
      AvroUnionExample.Second(id = 2, "number-2", 2.0),
      AvroUnionExample.Third(id = 3, meterId = "meter-3", 3.0),
      AvroUnionExample.First(id = 1, name = "number-1", "occupation-1")
    )
    Stream
      .emits(eventsToProduce)
      .covary[IO]
      .map(record => ProducerRecords.one(ProducerRecord[Int, AvroUnionExample]("example", record.id, record)))
      .through(KafkaProducer.pipe(producerSettings[Int, AvroUnionExample]))
      .map(_.passthrough)
      .compile
      .drain
      .as(ExitCode.Success)
  }
}

It ends up failing with:

Exception in thread "main" org.apache.kafka.common.errors.InvalidConfigurationException: Schema being registered is incompatible with an earlier schema; error code: 409

I checked the Schema Registry and I see that it actually registers the Second subtype/record (which is the first record being produced) rather than the entire union containing all the records (First/Second/Third):

{
  "type" : "record",
  "name" : "Second",
  "namespace" : "com.experiments.calvin.AvroUnionExample",
  "fields" : [ {
    "name" : "id",
    "type" : "int"
  }, {
    "name" : "deviceId",
    "type" : "string"
  }, {
    "name" : "reading",
    "type" : "double"
  } ]
}

Do you know if this is a limitation of the underlying library or something that can be worked around?
Thank you so much for your time and your great work

derive optional record

I tried to derive a codec for Option[Foo], but it failed:

@ final case class Foo(a:Int,b:String)
defined class Foo

@ Codec.derive[Option[Foo]]
cmd37.sc:1: magnolia: could not find Codec.Typeclass for type ammonite.$sess.cmd36.Foo
    in parameter 'value' of product type Some[ammonite.$sess.cmd36.Foo]
    in coproduct type Option[ammonite.$sess.cmd36.Foo]

also failed in this case:

@ final case class Bar(a:Int,b:Option[Foo])
defined class Bar

@ Codec.derive[Bar]
cmd40.sc:1: magnolia: could not find Codec.Typeclass for type Option[ammonite.$sess.cmd36.Foo]
    in parameter 'b' of product type ammonite.$sess.cmd39.Bar

am I doing something wrong?

Unsupported Avro type. Supported types are null, Boolean, Integer, Long, Float, Double, String, byte[] and IndexedRecord

I am trying to produce a key-value pair of (A, List[B]) to Kafka using Avro, Vulcan and a schema registry.

I am creating a serializer for List[B] like so:

avroSerializer[List[B]](Codec.list[B](bCodec)).using(SchemaRegistrySettings(conf))

but at runtime I get

Unsupported Avro type. Supported types are null, Boolean, Integer, Long, Float, Double, String, byte[] and IndexedRecord

When I set a breakpoint for that exception, it breaks in io.confluent.kafka.serializers.AvroSchemaUtils.getSchema, and the method looks like this, which does not seem to support arrays:

public static Schema getSchema(Object object) {
        if (object == null) {
            return (Schema)primitiveSchemas.get("Null");
        } else if (object instanceof Boolean) {
            return (Schema)primitiveSchemas.get("Boolean");
        } else if (object instanceof Integer) {
            return (Schema)primitiveSchemas.get("Integer");
        } else if (object instanceof Long) {
            return (Schema)primitiveSchemas.get("Long");
        } else if (object instanceof Float) {
            return (Schema)primitiveSchemas.get("Float");
        } else if (object instanceof Double) {
            return (Schema)primitiveSchemas.get("Double");
        } else if (object instanceof CharSequence) {
            return (Schema)primitiveSchemas.get("String");
        } else if (object instanceof byte[]) {
            return (Schema)primitiveSchemas.get("Bytes");
        } else if (object instanceof GenericContainer) {
            return ((GenericContainer)object).getSchema();
        } else {
            throw new IllegalArgumentException("Unsupported Avro type. Supported types are null, Boolean, Integer, Long, Float, Double, String, byte[] and IndexedRecord");
        }
    }

Am I doing something wrong? I wondered whether I have a dependency conflict and something is getting evicted, but when I look at evicted in sbt, everything is evicted in favour of newer versions at least.
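
One possible workaround, assuming the root cause is that the Confluent serializer only accepts top-level IndexedRecords, is to wrap the list in a record. The sketch below uses illustrative names and assumes the vulcan 1.x Codec.record signature:

import vulcan.Codec

// Stand-in for the real element type B.
final case class Reading(value: Double)

implicit val readingCodec: Codec[Reading] =
  Codec.record[Reading](name = "Reading", namespace = "com.example") { field =>
    field("value", _.value).map(Reading(_))
  }

// Wrapping the list means the serialized top-level value is a record,
// which AvroSchemaUtils.getSchema can handle.
final case class Readings(items: List[Reading])

implicit val readingsCodec: Codec[Readings] =
  Codec.record[Readings](name = "Readings", namespace = "com.example") { field =>
    field("items", _.items).map(Readings(_))
  }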

Support for TopicRecordNameStrategy

How does this library support TopicRecordNameStrategy?

I have noticed that I can use the '.withValueSubjectNameStrategy' method to set the correct strategy, but I struggle to create the correct codecs for it. Let's see an example:

We have two case classes that represent different schemas:

sealed trait General

case class A(name: String) extends General

case class B(age: Int) extends General

How can I sort this out?
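
As a hedged sketch (not a confirmed recommendation): with TopicRecordNameStrategy each concrete record is registered under its own subject, so one approach is to derive a codec per case class rather than a single union codec for General, and to select the strategy via the Confluent serializer property. How that property is passed to the settings is an assumption here:

import vulcan.Codec
import vulcan.generic._

// One codec per concrete record, so each registers under its own record name.
implicit val aCodec: Codec[A] = Codec.derive[A]
implicit val bCodec: Codec[B] = Codec.derive[B]

// The Confluent property that selects the strategy; passing it to the
// serializer settings (e.g. via a properties map) is left as an assumption.
val subjectNameStrategy =
  "value.subject.name.strategy" ->
    "io.confluent.kafka.serializers.subject.TopicRecordNameStrategy"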

License and group ID for v1.0.0

  • Should we change the legal entity for the license for v1.0.0?
  • If so, to what legal entity are we changing?
  • If so, what should the new group ID be?

Release v1.0.0

Vulcan is already being used in production across several companies. We should therefore release a v1.0.0 to more explicitly communicate binary compatibility. Before committing to v1.0.0, it might be a good idea to squeeze in a few changes.

Codec[String] is not able to decode a String value.

I have an issue when trying to decode an Avro message published to Kafka that was encoded with Vulcan's help. When consuming the message, the field type is String, as (I presume) inferred by Magnolia for the given case class, but Codec[String] has a case in its pattern matching that rejects everything that is not a Utf8:

https://github.com/ovotech/vulcan/blob/master/modules/core/src/main/scala/vulcan/Codec.scala#L1796

I have provided a similar implicit codec that changes the matched type from Utf8 to String, and it worked. So is this a bug, and should I propose a PR? Or am I doing something wrong?

I did try setting the "avro.java.string" property to "String" in the AvroSettings, but it didn't work.
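
For reference, a hedged sketch of the workaround codec described above, assuming the vulcan 1.x Codec.instance shape (schema, encode, decode); it accepts both Utf8 and java.lang.String values when decoding:

import org.apache.avro.SchemaBuilder
import org.apache.avro.util.Utf8
import vulcan.{AvroError, Codec}

val lenientString: Codec[String] =
  Codec.instance(
    Right(SchemaBuilder.builder().stringType()),
    str => Right(new Utf8(str)),
    (value, _) =>
      value match {
        case utf8: Utf8  => Right(utf8.toString)
        case str: String => Right(str)
        case other       => Left(AvroError(s"Got unexpected string value $other"))
      }
  )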

Thanks for all the work you have put into Vulcan!

Support reading no bytes

From what I have read of the Avro specification, it seems that zero bytes is in fact valid Avro (representing null). However, calling fromBinary on a zero-length byte array with a Codec[Option[Something]] yields a Left, where I think it should yield a Right(None).
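
A minimal illustration of the report, where Codec.fromBinary's signature (bytes plus writer schema, with the codec passed implicitly) is assumed from vulcan 1.x:

import vulcan.Codec

val codec: Codec[Option[Int]] = Codec.option(Codec.int)
val writerSchema = codec.schema.fold(err => sys.error(err.message), identity)

// Currently returns a Left; per the report above it should arguably be Right(None).
val decoded = Codec.fromBinary[Option[Int]](Array.emptyByteArray, writerSchema)(codec)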

Derived codec can't decode after binary serde of encoded data

Failing test:

import cats.data.NonEmptyList
import org.apache.avro.generic.GenericData
import org.scalacheck.ScalacheckShapeless
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.matchers.should.Matchers
import org.scalatestplus.scalacheck.ScalaCheckDrivenPropertyChecks
import org.apache.avro.Schema
import org.apache.avro.generic.{GenericDatumReader, GenericDatumWriter}
import org.apache.avro.io.{DecoderFactory, EncoderFactory}
import vulcan.Codec
import vulcan.generic._

import java.io.ByteArrayOutputStream

class Test extends AnyFlatSpec with Matchers with ScalaCheckDrivenPropertyChecks with ScalacheckShapeless {

  case class Abracadabra(qwe: Option[NonEmptyList[Int]])

  it should "decode after encoding and binary serde" in {

    def serde(schema: Schema)(a: Any): Any = {
      val writer = new GenericDatumWriter[Any](schema)
      val out = new ByteArrayOutputStream
      val encoder = EncoderFactory.get.binaryEncoder(out, null)
      val ba =
        try {
          writer.write(a, encoder)
          encoder.flush()
          out.toByteArray
        } finally out.close()

      val reader = new GenericDatumReader[Any](schema)
      val decoder = DecoderFactory.get.binaryDecoder(ba, null)
      val record = reader.read(null, decoder)

      record
    }

    val codec: Codec[Abracadabra] = Codec.derive[Abracadabra]
    val schema = codec.schema.toOption.get

    forAll { (x: Abracadabra) =>
      val encoded = codec.encode(x).toOption.get
      val serded = serde(schema)(encoded)

      val b1 = encoded.asInstanceOf[GenericData.Record] == serded.asInstanceOf[GenericData.Record]

      val r1 = codec.decode(encoded, schema)
      val r2 = codec.decode(serded, schema)

      println(x)
      println(b1)
      println(r1)
      println(r2)
      assert(r1 == r2)
    }
  }

}

It fails with:

AvroError(Error decoding Test.Abracadabra: Error decoding Option: Error decoding union: Missing alternative array in union)

The type of data that causes the error is Option[NonEmptyList[Int]].

Encouraging adoption of Vulcan

  • Tech Talks
  • Blog Posts
  • A migration guide from Avro4s to Vulcan showing how simple it is to drop in Vulcan as a replacement

Schema resolution: conform more closely with Avro spec

Writer schema validation is currently more strict than mandated by the Avro spec (https://avro.apache.org/docs/1.9.1/spec.html#Schema+Resolution), meaning schema evolutions that should be allowed are not.

  • Support promotion between different schema types, e.g. from int writer schema to long reader schema (#134)
  • Match named schema based on name rather than fullname (#136)
  • Allow matching named schema based on aliases

We may not need to handle schema name matching ourselves at all, since it's already done by the underlying Avro library.
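
As an illustration of the first kind of promotion mentioned above (data written with an int writer schema read with a long reader schema), assuming the vulcan 1.x Codec#decode signature of (value, writer schema):

import org.apache.avro.SchemaBuilder
import vulcan.Codec

val intWriterSchema = SchemaBuilder.builder().intType()

// Per #134 this currently returns a Left, although the Avro spec permits
// promoting an int writer schema to a long reader schema.
val promoted = Codec.long.decode(42, intWriterSchema)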

Union codecs do not match by namespace

Given two events both called MyEvent that are records with the namespaces org.foo.events and org.bar.events, the expected behaviour would be to successfully decode a message matching org.bar.events if the respective codec was present in the union. However, it fails, as altMatching only seems to pick up the first matching event in the union chain, which in this case would be the codec for org.foo.events.

Just want to check if this is indeed unexpected behaviour in vulcan, and what the behaviour should be - should both codecs be tested given the event or can altMatching be modified to match on the canonical name of the schema?

Happy to explain more if I haven't been clear here.

imapError message not returned when type decoded inside an Option

We have some codecs which bootstrap the String codec and convert values to our own internal enumeratum values.
The schema models the data as string or null, which we attempt to model as Option[OurEnumType].

The decoding works fine, except when an unexpected String comes through. The error message presented is
Exhausted alternatives for type org.apache.avro.util.Utf8 - rather than the underlying failure with our custom message.

Broadly our codec looks a bit like the following:

case class MyThing(value: String)
object MyThing {
  def fromString(s: String): Option[MyThing] = if (s=="thing") Some(MyThing(s)) else None
  implicit val codec: Codec[MyThing] =
    Codec.string.imapError(str => fromString(str).toRight(AvroError(s"$str is not what we want")))(_.value)
}

Is there a way to make the underlying error message bubble up? This issue led to a bit of head scratching because the stack trace didn't identify where the problem was either.
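
For reference, a hedged reproduction of the reported behaviour, reusing MyThing from above (Codec.option and the vulcan 1.x decode signature are assumptions):

import org.apache.avro.util.Utf8
import vulcan.Codec

val optCodec = Codec.option(MyThing.codec)
val schema = optCodec.schema.fold(err => sys.error(err.message), identity)

// Yields Left("Exhausted alternatives for type org.apache.avro.util.Utf8")
// instead of surfacing "unexpected is not what we want".
val decoded = optCodec.decode(new Utf8("unexpected"), schema)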

Proposed roadmap for Vulcan 2.0

Summary

Here are some thoughts on how to decouple the Vulcan API from the Java Avro SDK (JAvro), opening the way to adding an alternative backend that implements Avro directly. Previously we discussed doing this by introducing our own representation of encoded Avro values, so that Vulcan would convert between these and user types, and backends would convert between these and Avro wire formats. Instead, I want to suggest that we convert the codecs into an algebraic datatype that can be traversed by a separate interpreter to convert directly between user types and an arbitrary backend representation (a simplified sketch is given below).

This has a few advantages:

  • It avoids adding an extra layer of indirection at runtime.
  • Most of the work can be done incrementally as non-breaking changes in the 1.x series, as the implementation of Codec is invisible to users (whereas the representation of Avro values isn't).
  • It reduces API surface area - we can keep the details of the Codec ADT package-private, whereas it's not clear we'd be
    able to do the same for a model of Avro values.
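
To make the idea a little more concrete, here is a heavily simplified, hedged sketch of codecs-as-data plus a separate interpreter; the names and structure are illustrative only, not the proposed design:

// Codecs become plain, introspectable data...
sealed trait CodecAst
case object IntAst extends CodecAst
case object StringAst extends CodecAst
final case class RecordAst(name: String, fields: List[(String, CodecAst)]) extends CodecAst

// ...and a backend-specific interpreter traverses that data, so the Codec
// itself never references the backend (here: rendering a schema-like string).
def render(ast: CodecAst): String =
  ast match {
    case IntAst    => "int"
    case StringAst => "string"
    case RecordAst(name, fields) =>
      fields
        .map { case (fieldName, fieldAst) => s"$fieldName: ${render(fieldAst)}" }
        .mkString(s"record $name(", ", ", ")")
  }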

Roadmap

Changes in 1.x

  • Per #435, deprecate Codec.instance (which is coupled directly to the JAvro
    API) and replace most uses of it with a few primitives and combinators.
  • Convert codecs to a fully introspectable algebraic datatype. Following the example of UnionCodec
    in #435, convert all primitive codecs and combinators into named subtypes.
  • Refactor implementations of primitives and combinators into an interpreter of the newly introduced ADT. encode
    , decode and schema now delegate to the interpreter.
  • Deprecate the encode, decode, and schema methods on Codec, in favour of explicit use of the interpreter, to prepare for fully decoupling Codec from JAvro.

Changes in 2.0

  • Remove Codec.instance - all codecs must be derived from primitives we provide.
  • Move methods for serialization and deserialization from Codec to live with JAvro-based interpreter.
  • Remove encode, decode and schema methods on Codec
  • Consider exposing an alternative representation of schemas directly on Codec, either as a raw JSON string or as our own structured representation of schemas
  • The Avro.* type aliases are no longer aliases for JAvro representations - instead they are either phantom types or our own model of Avro schemas
  • Codec API is fully decoupled from JAvro
  • Separate the JAvro-based interpreter into a new module, remove JAvro dependency from core module

Hopefully these changes won't impact most users too much, given that the most common use case is via the integration
with fs2-kafka.

Any feedback would be much appreciated!

Dotty/ Scala 3 support

If possible, we could add support for Dotty. I will read into what is needed, but given that Vulcan already supports Scala 2.13, it should be simple. Still need to check dependencies.

Custom `DatumReader`/`DatumWriter`

The current implementation, which converts to and from the Java GenericDatumReader/GenericDatumWriter representation, is unsatisfactory for reasons that have been discussed previously; one suggestion has been to reimplement Avro using scodec. Instead, I think the best way forward would be to continue using the Java Decoder and Encoder (which handle the low-level mechanics of converting between byte streams and JVM primitives, while abstracting over the binary and JSON encodings) but have our own implementations of DatumReader and DatumWriter, which orchestrate the Decoder/Encoder operations to work with complete data structures. That gives us complete control of the representations we work with and avoids any extra indirection at runtime, while keeping the benefit of encoders/decoders that are likely to be better tested and more performant than anything we write ourselves, and that will interoperate smoothly with the Confluent Kafka serdes.

Easiest to do after Codec has been converted to a data structure per #437 - we can then add functionality, separate from the Codec, to compile it (together with an optional schema) to a DatumReader or DatumWriter.
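
As a rough, hedged sketch of the shape this could take (only the org.apache.avro.io.DatumWriter interface below is real; the write body is a placeholder):

import org.apache.avro.Schema
import org.apache.avro.io.{DatumWriter, Encoder}

// A custom DatumWriter orchestrates the low-level Encoder calls itself,
// instead of first converting values into the GenericData representation.
final class VulcanDatumWriter[A](private var schema: Schema)
    extends DatumWriter[A] {

  override def setSchema(schema: Schema): Unit =
    this.schema = schema

  override def write(datum: A, out: Encoder): Unit = {
    // Walk `schema` alongside `datum`, calling e.g. out.writeString(...),
    // out.writeLong(...) or out.writeIndex(...) as appropriate.
    ???
  }
}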

Relax validation of reader/writer schema type names during decoding

The Vulcan decoders for named types impose stricter validation than the underlying Avro schema resolver, which allows a structural match based on fields if the names don't match between reader and writer schemas (see the discussion here).

That's a defensible design choice in itself, but in combination with Avro parsing bugs such as this one it results in decoding failures. Specifically, rendering a schema as json and then re-parsing it, as happens when a schema is published to a schema registry and then retrieved, can change the namespace of deeply nested types from null to the closest enclosing non-null namespace, as in the example given here. When a codec with this schema is used to publish and then read a message using fs2-kafka-vulcan, this results in an error because the record type's name in the schema retrieved from the schema registry doesn't match the name expected by the codec.

I'd suggest either removing the name validation entirely, or at least allowing a match based on name rather than fullname when the reader expects the null namespace. I'm happy to open a PR.

Require all `Codec` instances to have a schema

I propose that we change the type of the Codec#schema field from Either[AvroError, Schema] to just Schema, and throw an exception if trying to instantiate a codec for which a valid schema can't be generated.

This breaks referential transparency, but I think in this case it's justified. In the overwhelming majority of use cases, the desired schema is known in advance, so failure to instantiate indicates a programming error.

For edge-cases in which schemas are genuinely being determined at runtime, we could also provide methods returning Either[AvroError, Codec[A]], which also makes more sense since failure to produce a schema is semantically an error.

Support FIXED decimal type

Avro supports fixed-size binary with decimal logical type, but we don't. We should add support for this.
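
For reference, a hedged sketch of the target schema built with the Java Avro API (the size, precision and scale below are illustrative):

import org.apache.avro.{LogicalTypes, Schema, SchemaBuilder}

// A fixed(8) schema carrying the decimal logical type with precision 18 and scale 2.
val fixed: Schema = SchemaBuilder.builder().fixed("Amount").size(8)
val fixedDecimal: Schema = LogicalTypes.decimal(18, 2).addToSchema(fixed)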

Support recursive record schemas

Codec.record does not currently support recursive types.

The following example:

import vulcan.Codec

final case class Recursive(next: Option[Recursive])

object Recursive {
  implicit def recursiveCodec: Codec[Recursive] =
    Codec.record[Recursive]("Recursive") { field => 
      field("next", _.next).map(Recursive(_))
    }
}

yields a StackOverflowError for Codec[Recursive].

scala> Codec[Recursive]
java.lang.StackOverflowError
  at scala.Option.fold(Option.scala:175)
  at vulcan.Codec$.record(Codec.scala:1468)
  at Recursive$.recursiveCodec(<pastie>:28)
  at Recursive$.$anonfun$recursiveCodec$1(<pastie>:30)
  at vulcan.Codec$.record(Codec.scala:1469)
  at Recursive$.recursiveCodec(<pastie>:28)
  at Recursive$.$anonfun$recursiveCodec$1(<pastie>:30)
  ...

Codec.derive also fails to derive a Codec with the following error message.

scala> Codec.derive[Recursive]
error: magnolia: could not find Codec.Typeclass for type Option[Recursive]
    in parameter 'next' of product type Recursive

Some issues migrating from Avro4s

Because Avro4s is too slow in writing binary (for our use case), we want to migrate to Vulcan. Compared to Avro4s the performance is very good! We see 10 to 12 times the throughput on a single thread.

Unfortunately, we do have some problems when migrating from Avro4s:

  1. optional union types get nested --> solved in #450
  2. missing the field mapper
  3. AvroName annotation is ignored --> split to #443
  4. field defaults are ignored
  5. generic parameters are not included in a record name --> proposal in #444

Optional union types get nested

  def main(args: Array[String]): Unit = {
    import vulcan.Codec
    import vulcan.generic._

    @AvroNamespace("ns")
    sealed abstract class Location extends Product with Serializable
    @AvroNamespace("ns")
    case class PostalAddress(city: String) extends Location
    @AvroNamespace("ns")
    case class GeographicLocation(lat: Double, long: Double) extends Location
    @AvroNamespace("ns")
    case class Thing(location: Option[Location])
    implicit val locationCodec: Codec[Location] = Codec.derive
    implicit val ThingCodec: Codec[Thing] = Codec.derive
    println(ThingCodec.schema.left.get.message)
  }

prints:

org.apache.avro.AvroRuntimeException: Nested union: [
    "null",
    [
        {"type":"record","name":"GeographicLocation","namespace":"ns","fields":[{"name":"lat","type":"double"},{"name":"long","type":"double"}]},
        {"type":"record","name":"PostalAddress","namespace":"ns","fields":[{"name":"city","type":"string"}]}
    ]
]

(with newlines added for clarity)

I expected Vulcan to silently add the null type to the existing union (producing a single flattened union of null, GeographicLocation and PostalAddress), not create a new union with the 'Location' union nested inside it.

Missing a field mapper

Unfortunately we need to use case classes that have weird field names. They contain weird characters like @ and :. With Avro4s we defined a field mapper that would map these to valid names.

As a workaround we could use the @AvroName annotation on the case class fields. Unfortunately, we are still not allowed to use the @ character in the field name even though it is no longer relevant for the schema. This is probably related to the next item:

AvroName annotation is ignored

  def main(args: Array[String]): Unit = {
    import vulcan.Codec
    import vulcan.generic._

    @AvroNamespace("ns")
    case class Item(@AvroName("id2") id1: String)
    println(Codec.derive[Item].schema.right.get.toString(true))
  }

prints:

{
  "type" : "record",
  "name" : "Item",
  "namespace" : "ns",
  "fields" : [ {
    "name" : "id1",       // <----- "id1" instead of expected "id2"
    "type" : "string"
  } ]
}

Field defaults are ignored

  def main(args: Array[String]): Unit = {
    import vulcan.Codec
    import vulcan.generic._

    @AvroNamespace("ns")
    case class Item(id1: String = "foo")
    println(Codec.derive[Item].schema.right.get.toString(true))
  }

expected:

{
  "type" : "record",
  "name" : "Item",
  "namespace" : "ns",
  "fields" : [ {
    "name" : "id1",
    "type" : "string",
    "default": "foo"       // <-- expected but not present
  } ]
}

Generic parameters are not included in a record name

  def main(args: Array[String]): Unit = {
    import vulcan.Codec
    import vulcan.generic._

    @AvroNamespace("ns")
    case class SomeData(foo: String)
    @AvroNamespace("ns")
    case class Event[A](data: A)
    implicit val someDataCodec: Codec[SomeData] = Codec.derive
    println(Codec.derive[Event[SomeData]].schema.right.get.toString(true))
  }

This prints Event, whereas Avro4s gives Event__SomeData.

The Avro4s approach has the advantage of giving each type instance of Event a different name. This is relevant if the consumer of the schema uses code generation.

Note that we cannot solve this with an @AvroName annotation, as then the name would be the same for every type instance of Event. (If that is what @AvroName is supposed to do, it currently does not seem to change the name when set on a case class.)

All code in this issue tested with Vulcan 1.3.8, Scala 2.12.15.
