jspecify / jspecify

An artifact of fully-specified annotations to power static-analysis checks, beginning with nullness analysis.

Home Page: http://jspecify.org

License: Apache License 2.0


jspecify's Introduction

JSpecify

An artifact of well-specified annotations to power static analysis checks and JVM language interop. Developed by consensus of the partner organizations listed at our main web site, jspecify.org.

Our current focus is on annotations for nullness analysis.

Status

Version 0.3 is relatively safe to depend on in your code. Or you can read a more detailed answer.

Things to read

See jspecify.org/docs/start-here.

jspecify's People

Contributors

artempyanykh, ascopes, cpovirk, cushon, dependabot[bot], eamonnmcmanus, kengotoda, kevin1e100, koppor, mernst, msridhar, netdpb, petukhovvictor, scolsen, wmdietl, wmdietlgc, wohops


jspecify's Issues

Distinguishing signatures from implementation code - what about local classes?

We're attempting to specify what our nullness annotations mean when applied to "signatures" (method param/return types, field types, and type parameter bounds).

We also acknowledge that they can be applied to almost any non-root type (except an outer type, which is non-nullable by nature), wherever that type appears.

But we're attempting NOT to specify what they mean when applied to root types within "implementation code", and to let this be checker-specific behavior (at least for now).

What do we mean by "implementation code"? Currently the glossary says: "any code appearing inside the body of a method, constructor, lambda expression, static initializer, or instance initializer, or in the initialization expression (right-hand side) of a field declaration."

That means these potentially-nullable type contexts will be ones we don't address normatively:

  • A local variable declaration.
  • A cast expression.
  • An array creation expression.
  • An explicit type argument supplied to a generic method or constructor (including via a member reference), or to an instance creation expression for a generic class.
  • (Are there other cases?)

A big thing that is missing: what about a local class (whether named or anonymous)? Is it still inside implementation code, with unspecified behavior, or are the members of the local class back in the club?

The way we talk about implementation code seems to suggest that local classes are also implementation code. But we would want to validate the signatures of that local class against those of its supertype, wouldn't we? On the other hand, just because a "good" checker would do so doesn't necessarily mean we need to require it.

Can @NullMarked target fields, method parameters, or local variables? [working decision: no, but see #310]

From proposal comment thread by @brychcy:

But types can appear in the field's declaration (for generics or array contents) and there can be an initializer (which can contain type declarations as it can be an anonymous type or lambda).

Same for local variables and the declaration part of parameters.

Eclipse's @NonNullByDefault is supported in all these locations, and the effect extends to the end of the initializer (if present).

This also has the advantage that @NonNullByDefault doesn't need a @Target annotation, which would be problematic as the MODULE value is not understood pre-Java 9.

Yet I won't argue it is absolutely necessary in a common standard - I don't know how hard it would be for other tools to implement this.

No question these locations could be useful (though maybe mostly for implementation checking, with the possible exception of method parameters?), but I believe they don't define a "scope" per the JLS, which is why they were originally excluded [edit: cpovirk adds: nor are they ever the element that "encloses" anything in the Java compiler API, a natural match to how we define scoping]. Moreover, I believe the bytecode of anonymous inner classes doesn't indicate whether they were defined "inside" one of these things, but maybe tools never need to look at bytecode for them?

One question I have: what are the use cases for placing @NonNullByDefault in these locations? How commonly do users do that?

Declaring that one annotation aliases or specializes another (`@Implies`) [working decision: no]

A meta-annotation, used like:

@Implies(Nullable.class)
public @interface MyNullable { ... }

... which declares the annotated annotation to have all of the semantics of the linked annotation (plus optionally more).

This could make for easier migration, it could be used by things like @PolyNull so that the user doesn't have to redundantly write @Nullable @PolyNull, it could be used by a library that parallels ours to remove runtime retention, and perhaps other uses as well.
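As a hedged sketch of that aliasing use case (the @PolyNull declaration below is purely illustrative, standing in for any checker-specific qualifier):

import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

// A checker-specific qualifier declaring that it also carries the standard Nullable
// semantics, so users would not have to write @Nullable @PolyNull at every use.
@Implies(Nullable.class)
@Target(ElementType.TYPE_USE)
public @interface PolyNull {}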

Repository for java.* annotation-overlay files, hosted/maintained by us

Several tools provide specifications for the JDK and libraries:

Will we provide a central repository of codeanalysis-annotations for the JDK and libraries?
Eventually the hope is that OpenJDK will include the annotations directly in its source code, but what do we do until then?
If so, how will we determine "correct" annotations? How will we verify submissions? What submissions will we accept?

This is important for nullness, but equally so for all other standard annotations.

Not-Null-By-Default semantics

In a not-null-by-default context, nearly all type uses are not-null by default:

  • Field, parameter, and return types are NotNull
  • Type arguments are NotNull
  • Array components and array dimensions are NotNull
  • Implicit and explicit upper bounds of type parameters are NotNull, see #12 (comment)

Exceptions are:

  • Local variables are @Nullable and flow-sensitive type inference is used to refine the type
  • Class declarations are @Nullable, see #7 (comment)
  • Cast and instanceof types depend on the expression type, that is:
	@Nullable Object no = …;
	@NotNull String s = (String) no;

will treat the cast as @Nullable and therefore the assignment will give a warning.
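A hedged sketch of how these defaults and exceptions might read in source, assuming a defaulting annotation spelled @NotNullByDefault as in other issues here:

import java.util.List;

@NotNullByDefault
class Defaults {
  List<String> names = List.of();                   // field type and its type argument: not-null

  String first(List<String> xs) {                   // parameter and return types: not-null
    String head = xs.isEmpty() ? null : xs.get(0);  // local variable: nullable, refined by flow-sensitive inference
    return head == null ? "" : head;
  }
}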

Decide final package name

Some considerations:

  • Should it be under any existing known organization, or register a new domain and use reverse-domain naming, or just stake out a new top-level package?

  • Should the idea of being annotations-only be built into the name?

  • Should it be a "boring" prosaic descriptive name, or something more codenamey?

  • Finally, what should be the dang name :-)

Reference implementation scope

The question was initially raised by @amaembo:

Reference Implementation scope. I propose that it should be as narrow as possible. It should not be a production-ready static analysis tool. Probably it should be limited to a library + testcases which 1) provides a set of model classes to represent new types, similar to the corresponding Javac classes (probably extending them), 2) provides a way to get a type of given code element (field, method, parameter), 3) for two given types A and B checks whether A is a subtype of B. This is enough to make unambiguous interpretation of annotations. Producing actual warnings should be out of scope of RI, thus we should not care whether method call changes field or not, how we should treat synchronization, object initialization, etc. This could be a part of errorprone, as a separate Google project which could depend on RI library. Otherwise it would be too much for our purposes.
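A hedged sketch of how narrow that reference-implementation surface could be; all of the names below are hypothetical, not an actual JSpecify API:

import javax.lang.model.element.Element;

// A model of an "augmented" type: a base type plus its nullness information.
interface NullnessType {}

// The three capabilities the comment asks for: model the new types, read the type of a
// given code element, and answer subtyping questions. Producing warnings stays out of scope.
interface ReferenceImplementation {
  NullnessType typeOf(Element element);
  boolean isSubtype(NullnessType a, NullnessType b);
}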

Require a finding on a contravariant parameter type? [working decision: no]

In the draft proposal, there's a part that allows for overrides to have value parameters with contravariant nullability:

The method’s parameter types may have wider nullability than the corresponding superparameters' types, meaning, it suffices for superparameters to be type-convertible to the overriding method’s parameter types (while base types have to follow the normal rules of the compiler so overriding works as expected).

That might be a problem for the Kotlin compiler, since in pure Kotlin value parameters are not contravariant in overrides, and we apply the same logic when looking at annotated Java code.

The reason we don't allow contravariant parameters in overrides is basically the same as in Java: they might be considered overloads. In Java, one cannot override with a wider-typed parameter (the types must be equal). The same applies to Kotlin, but we have nullability as a built-in part of the type system.

interface A {
    fun foo(x: String)
}

class B : A {
    override fun foo(x: String) {
        
    }
    
    @JvmName("fooNullable")
    fun foo(x: String?) {} // is a different overload for `foo`
}

fun main() {
    B().foo("") // resolved to the not-nullable
    B().foo(null) // resolved to nullable
}

That means that when we see contravariant nullability in Java annotations, such code is assumed to be illegal, so we ignore the annotation info for such parts completely as inconsistent:

interface A {
    void foo(@NotNull String x);
}

class B implements A {
    @Override
    public void foo(@Nullable String x) {

    }
}

B::foo has a signature like fun foo(x: String!), i.e. it has unknown nullability from Kotlin's point of view.

But at the same time it's fine to transform from unknown nullness to both NotNull and Nullable:

interface A {
    void foo(String x, String y);
}

class B implements A {
    @Override
    public void foo(@Nullable String x, @NotNull String y) {

    }
}

In the above example, B::foo is perceived by Kotlin as fun foo(x: String?, y: String).

Nullness: Generic type semantics

Type parameter declaration

A type parameter has both an upper and a lower bound.
The top of the type hierarchy is @Nullable j.l.Object and the bottom is @NotNull j.l.Void.
The null literal has type @Nullable j.l.Void.

See #12 (comment) for a discussion of upper bounds.

An annotation on the type parameter declaration itself can be used to annotate the lower bound:

class NullonlyData<@Nullable T extends @Nullable Number> { T g; }

Note that the lower bound annotation is important to decide what assignments are possible.
Within NullonlyData, null can be assigned to field g, because NullonlyData can only be instantiated with an @Nullable type argument.
We feel that this is useful only in rare cases and propose to forbid annotating a type parameter declaration, even in the short form class C<@Nullable T>. Instead, the explicit class C<T extends @Nullable Object> should be used.

Type parameter use

An annotation on a type parameter use overrides the upper and lower bound annotation.

class Cache<T extends @Nullable Object> { @Nullable T cache; }

Within class Cache it is possible to assign null to field cache, because of the @Nullable annotation.

Wildcards

A wildcard can have an annotation on the wildcard itself and on the explicit bound. The annotation on the wildcard itself applies to the bound that is not explicitly specified.

class List<T> {  // @Nullable Object upper bound; @NotNull Void lower bound
    T f;
}

- List<?> l1; // @Nullable Object upper bound; @NotNull Void lower bound
    Read of l1.f: @Nullable Object, from upper bound of T
    Write of l1.f: not possible, as lower bound is @NotNull Void

- List<@Nullable ?> l2; // both bounds @Nullable
    Read of l2.f: @Nullable Object
    Write of l2.f: possible, but only with `null`

- List<? extends Number> l3;
    Read of l3.f: The type depends on the upper bound from the type parameter declaration
        and the bound of the wildcard, so here we get the intersection of @Nullable Object
        and @NotNull Number, so in effect we can read @NotNull Number
    Write of l3.f: @NotNull Void lower bound, no writes

- List<? extends @Nullable Number> l4;
    Read of l4.f: effectively @Nullable Number
    Write of l4.f: @NotNull Void lower bound, no writes

- List<? super Number> l5;
    Read of l5.f: Upper bound from type parameter declaration, @Nullable Object
    Write of l5.f: @NotNull Number types or their subtypes can be written.

- List<? super @Nullable Number> l6;
    Read of l6.f: @Nullable Object
    Write of l6.f: @Nullable Number types or their subtypes can be written.
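A hedged sketch of the l3 and l5 cases from the list above, restating the toy List class and using the working annotations only in comments:

class List<T> { T f; }  // as above: @Nullable Object upper bound, @NotNull Void lower bound

class WildcardDemo {
  void demo(List<? extends Number> l3, List<? super Number> l5) {
    Number n = l3.f;             // read: effectively @NotNull Number (intersection of the bounds)
    // l3.f = n;                 // write: rejected, the lower bound is @NotNull Void
    l5.f = Integer.valueOf(42);  // write: any @NotNull Number (or subtype) may be written
    Object o = l5.f;             // read: @Nullable Object, from the type parameter's upper bound
  }
}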

what happens to type variables when @NullnessUnknown (and @NotNull) type parameters are instantiated?

Capturing a comment thread here

Let's say there's a class Foo that's not annotated at all, i.e., T's effective specifier is "unknown nullness":

public class Foo<T> {
  private T value;
  T getOrThrow() { return checkNotNull(value); }
  T getOrNull() { return value; }
  void set(T value) { this.value = checkNotNull(value); }
}

Now in some other Java file you do:

@DefaultNotNull
public class Bar {
  String baz(Foo<String> x) {
    return x.getOrNull();
  }
}

Meaning x has the fully explicit type @NotNull Foo<@NotNull String>. So when considering method calls on x, should we (unsoundly) assume that getOrNull() will return a @NotNull String? That set's parameter is a @NotNull String?

@cpovirk points out:

I commented on this somewhere else, saying that it was pretty scary: Even if Foo is @DefaultNullnessUnknown, then sure, Bar's Foo<String> can mean Foo<@NotNull String>. But I hope that doesn't mean that x.getOrNull() is considered to return @NotNull String. I'm hoping that @NotNull gets trumped by the effectively @NullnessUnknown T return type of getOrNull(), per:

"If the bound’s effective specifier is @NotNull or @NullnessUnknown then that is the type variable usage’s effective specifier."

I wrote the sentence quoted above but I don't think I anticipated this application of it. We could possibly do it that way but I think we'll want to be sure either way.

In general I believe the mental model I've been working with is that when a type parameter gets instantiated, the instantiation gets "pasted" everywhere the corresponding type variable is used. So T getOrNull() becomes @NotNull String getOrNull(). I think that's what you want to do if T is declared as <T extends @Nullable Object>. But if T is bounded by @NullnessUnknown, as in the above example, then we could indeed just consider occurrences of T to have unknown nullness no matter how T is instantiated. And while the language quoted above just discusses type variables, and we've been trying not to specify implementation checking and what happens on method calls, it does seem like we want to formalize what Foo<@NotNull String> means in terms of methods declared in Foo.

Playing with IntelliJ's code completion popups, it seems that Kotlin, given x: Foo<String>, indeed types x.getOrNull() : String!, i.e., with unknown nullness as @cpovirk would like. Given y: Foo<String?>, it types y.getOrNull() : String?, i.e., as @Nullable (which also seems sensible but I'm not sure how to specify that).

I'll note that the inverse situation also creates headaches. What I mean is, if Foo was instead declared class Foo<T extends @NotNull Object>, and given a variable Foo<@NullnessUnknown String> z, then should we expect z.getOrNull() to return a @NotNull String or a @NullnessUnknown String? Should we expect z.set(expr) to expect a @NotNull String argument or @NullnessUnknown String? I think the answer here is that it's sensible to use @NotNull String for both method result and method parameter type, though it is weird to get any @NotNull even though we had Foo<@NullnessUnknown String> z. Especially considering that if Foo was declared class Foo<T extends @Nullable Object> then we presumably would expect the same z.getOrNull() to return @NullnessUnknown String.

Maybe the asymmetry pointed out in the last example is ok. I've been trying to avoid that very asymmetry with wildcards, but if we consider it ok here then I think we may need to revisit it there for consistency.

How to handle weird methods like `requireNonNull(Object)`

This method currently looks like so:

public static <T> T requireNonNull(T obj) {
  if (obj == null) throw new NullPointerException(); // throw NPE if obj is null
  return obj;                                        // otherwise return obj
}

A method like this is supposed to be used when you have a @UnknownNullness Foo or a @Nullable Foo and you want to get a @NotNull Foo instead. (Users need this; we can't allow them to just cast because this would require bytecode munging.)

A user with an expression that is either nullable or of-unknown-nullness needs to be able to call this. If the expression is already non-nullable, it should probably work as well, and tools can offer a separately configurable "you probably don't need to do this" warning.

And of course, what they get back should be non-nullable.

So would the signature look like this?

public static <T> @NotNull T requireNonNull(@Nullable T obj) {
  if (obj == null) throw new NullPointerException(); // throw NPE if null
  return obj;                                        // otherwise return it
}

Assuming that (or something like it) fulfills the requirements, then there's still another question.

What's very unusual about this method is that if it succeeds then a variable passed in for obj can automatically be presumed by the inferencer to be non-nullable itself, whether the return value is used or not. A tool should view it the same as it would an explicit if-throw pattern.
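A hedged illustration of that property, using java.util.Objects.requireNonNull and the working @Nullable annotation:

import java.util.Objects;

class RefinementDemo {
  void use(@Nullable String name) {
    Objects.requireNonNull(name);       // throws NullPointerException if name is null
    // After this call a flow-sensitive checker can treat name itself as non-null,
    // just as it would after an explicit if (name == null) throw ... pattern,
    // even though the return value is discarded.
    System.out.println(name.length());
  }
}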

We wouldn't want every tool to have to keep its own hardcoded list of methods that have this property, so do we need to provide a special annotation to cover this case?

Lambda/method references and nullability

The question was initially raised by @amaembo:

What about lambdas/method references? Suppose we have a function defined as interface Function<@Nullable A, @Nullable B> { B apply(A a);}. Then declare a lambda like Function<String, String> x = a -> a.trim(); (note the declaration is unannotated). A question: should we issue a nullability violation warning here?
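A hedged sketch of that scenario, assuming @Nullable may be placed on type parameter declarations as the quote writes it:

// The interface declares both type parameters nullable, so A may stand for a nullable type.
interface Function<@Nullable A, @Nullable B> {
  B apply(A a);
}

class LambdaDemo {
  // The declaration is unannotated: should a checker report that a.trim() may dereference
  // null, given that A could be instantiated with a nullable String?
  Function<String, String> x = a -> a.trim();
}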

DefaultNotNull and the split-package case

Recently we stumbled across the following problem. A user had two versions of the Spring framework library on the class-path, namely spring-core:4.3.23.RELEASE and spring-core:5.0.9.RELEASE. Spring 5 has a package-level NotNull annotation in org.springframework.core.annotation.package-info (org.springframework.lang.NonNullApi to be precise), while Spring 4 has no such annotation. Several methods have overridden nullability, e.g. org.springframework.core.annotation.AnnotationUtils.findAnnotation is marked in Spring 5 as Nullable, but not marked at all in Spring 4. So normally for Spring 4 IDEA should use "unknown" nullability (or try to infer it from method bytecode as we sometimes do), while for Spring 5 it should use "nullable" nullability. However, when both are present on the classpath and Spring 4 comes first, IDEA finds the findAnnotation method from Spring 4, sees that it has no annotation, then checks the package annotation and sees the package-info from Spring 5 (there's no package-info in Spring 4), which says that all methods in this package are not-null by default. Thus IDEA assumes that findAnnotation is not-null, which results in false-positive warnings.

I agree that it's a questionable project structure, but as we can see, it happens. So the question is whether package-level annotations may affect classes from different class-path roots (e.g. different jar files). This is not a problem with JPMS, as split packages are disallowed there, but people still only rarely migrate fully to JPMS. We should have a clear statement on whether a package-level annotation affects every class in the given package, regardless of the class-path root (thus the observed behavior is intended and the only way a user can get rid of the false-positive warnings is to fix the project setup), or package-level annotations affect only the class-path root where they are defined (thus the observed IDE behavior needs to be fixed). In the Spring JavaDoc there's no clear statement on whether org.springframework.lang.NonNullApi could affect different class-path roots, and we are inclined to fix this particular case to make the user happy. But for our project we may settle on a different decision.

Related to #8.

Nullness: Annotating wildcards and their bounds

Extracting this from #19 for better granularity.

What we seem to agree on

  • Wildcard bounds can be annotated, as in Foo<? extends @Nullable Bar>
  • There's a parallel with type parameter bounds, and it would be logical to apply the same rules

The current rule for type parameter bounds

In the context of @DefaultNullable/@DefaultNotNull, if a type parameter has no explicitly annotated bound, its bound is considered to be annotated according to the specified default.

Questions

  • Is each of the following forms allowed and if yes, what does it mean: Foo<@NotNull ?>, Foo<@NotNull ? extends @Nullable Bar>, Foo<@NotNull ? super @Nullable Bar>
  • Does the default specified by @DefaultNullable/@DefaultNotNull apply to the wildcard itself? (If we give that any meaning while answering the previous question)
  • Does the default apply to explicit bounds if they are not annotated? E.g. does Foo<? extends Bar> become effectively Foo<? extends @NotNull Bar> when in the scope of @DefaultNotNull?

Unbounded wildcard (?)

It seems that we might need to allow annotating an unbounded wildcard Foo<@NotNull ?> because its bound is not always denotable, e.g. in the case of F-bounded recursive types:

interface C<T extends C<T>> {
    T get();
}

It looks like C<?> and C<? extends Object> may not mean the same thing here, but in fact JLS §4.5.1 says this:

The wildcard ? extends Object is equivalent to the unbounded wildcard ?.

And the following code compiles correctly:

    void test(C<?> unbounded, C<? extends Object> bounded) {
        unbounded.get().get();
        bounded.get().get();
    }

The bounds from the declaration site are implicitly applied to the wildcard even if it declares only Object as its explicit bound, but then it's hard to tell how an annotation on that Object should interact with those implicitly applied bounds. So, for now, we leave this as an open question.

Bounded wildcards

A wildcard type in Java can specify either an upper bound (? extends Foo) or a lower bound (? super Foo). The bounds are normal type usages, so they can be annotated explicitly as @NotNull or @Nullable.

By analogy with type parameter bounds, it would make sense to apply defaults to unannotated wildcard bounds. E.g. Foo<? extends Bar> becomes effectively Foo<? extends @NotNull Bar> when in the scope of @DefaultNotNull.

"Write-only list of not-null Foo"

The intuition for List<? super Foo> is roughly "a write-only list of Foo". One may want to say something like "write-only list of not-null Foo", or "a list where you can add only not-null Foo's".

Observation: List<? super @NotNull Foo> does not capture this intent because it can be assigned List<@Nullable Foo> (@Nullable Foo is a supertype of @NotNull Foo).

To express this intent, one could use the fact that each wildcard implicitly has two bounds, so that ? super Foo actually means "lower bound Foo, upper bound Object" (and ? extends Foo is "lower bound Bottom, upper bound Foo", where Bottom is the subtype of all types, i.e. the empty type, like Kotlin's Nothing).

So, the "write-only list where you can only add not-null Foo's" would be a list of "lower bound @NotNull Foo upper bound @NotNull Object). We could adopt a convention that this can be expressed as List<@NotNull ? super @NotNull Foo> where the annotation of the ? applies to the bound that is not explicit.

Some more examples of this convention:

Is a subtype of:                            List<@NotNull Foo>   List<@Nullable Foo>
List<@NotNull ? super @NotNull Foo>         subtype              not a subtype
List<@Nullable ? super @NotNull Foo>        subtype              subtype
List<@Nullable ? super @Nullable Foo>       not a subtype        subtype
List<@NotNull ? super @Nullable Foo> *      not a subtype        subtype

List<@NotNull ? extends @NotNull Foo>       subtype              not a subtype
List<@Nullable ? extends @NotNull Foo> *    not a subtype        subtype
List<@Nullable ? extends @Nullable Foo>     not a subtype        subtype
List<@NotNull ? extends @Nullable Foo>      subtype              subtype

* Inconsistent bounds

Notes on the related discussions are available here.

How to conceptualize subtyping for legacy-nullness

It's beyond debate that @NotNull Foo <: @Nullable Foo. When @LegacyNull Foo comes in, well, it should still be uncontroversial that assigning from not-null to legacy is safe, and that assigning from legacy to nullable is safe.

But then the questions of assigning legacy to not-null, and/or assigning nullable to legacy, are not as clear. I would say there are two choices of rule sets that both make good sense:

(1) The strict interpretation: neither of these cases is assignable; both are unsound.

(2) The lenient interpretation: both are assignable, because we don't have enough basis for complaint and we will annoy users.

Interestingly, (1) leads to a "normal" subtype arrangement of not-null <: legacy <: nullable where the anti-symmetric property is preserved. (2) leads to breaking not just anti-symmetry but also transitivity (we want nullable <: legacy and legacy <: not-null but we don't want nullable <: not-null!).

I think a clean solution may be this: assignability rules really are defined conservatively as in (1). But we also have something akin to Java's "unchecked conversion". Even though nullable </: legacy and legacy </: not-null, you can still make that conversion anyway IF you @SuppressWarnings("nullness:legacy") (let's not debate the name though).

What's nice about this is that projects can decide whether they want to be strict or lenient based on whether they apply that warning suppression at a broad level or not. I think it's also much easier to think about this way, rather than saying only "not-null <: nullable but legacy is a special thing sitting off to the side somewhere".
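A hedged sketch of how that could look in code, using the placeholder names from this thread:

class LegacyConversion {
  @NotNull String strict(@LegacyNull String legacy) {
    // Under the strict rules in (1), a plain assignment is rejected: legacy </: not-null.
    // The proposed escape hatch is an explicit, auditable suppression:
    @SuppressWarnings("nullness:legacy")
    @NotNull String converted = legacy;
    return converted;
  }
}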

Ability to correctly annotate without immediately breaking callers (e.g. `@Migrating`)

Motivation

There is a huge body of existing Java code that only sparingly uses annotations, e.g. some usage of @Nullable.
How can we make it as easy as possible to transition such code to Java with codeanalysis-annotations, where tools enforce correct usage?

Let's take an un-annotated Java file:

class Demo {
  Object m(Object p) ...
}

This class can be used from a context that enforces, e.g., null safety.

@NotNullByDefault
class User {
  void use(Demo d) {
    d.m(null).toString();
  }
}

The code analysis can now either make optimistic or conservative assumptions about the signature of Demo or introduce platform types to check correct usage.
Let us assume that User produces no compile-time errors.

We now want to convert Demo to also be null safe:

@NotNullByDefault
class Demo {
  @Nullable Object m(Object p) ...
}

The code analysis can now ensure that the implementation of Demo is safe.
However, now the usage in User will produce two errors:

  • passing null to m, which now defaults to @NotNull
  • dereferencing the @Nullable return value of m

Because Demo is now fully annotated (considering the @NotNullByDefault and explicit annotations), no optimistic defaults would be assumed and no platform types would be generated.

If there are many uses of an API, we need a way to help with this migration.

Proposal

We introduce a new declaration annotation tentatively named @UnderMigration, which is applicable to declarations of fields, methods, type parameters, types, and packages.

In our example, we could annotate Demo as:

@UnderMigration(since = "2019-01-30")
@NotNullByDefault
class Demo {
  @Nullable Object m(Object p) ...
}

This marks Demo as being newly annotated and gives code analysis tools the additional information that the new signatures might produce many warnings.
For example, a tool could decide to initially continue to use platform types, then issue warnings instead of errors for such APIs, and then finally treat the API as final.

The new @UnderMigration annotation separates the semantic information of all codeanalysis-annotations from the migration issues.
The API is annotated with the semantically correct annotations and one annotation can be used for migration instead of mixing this concern with every check.

The annotation would look roughly like:

@interface UnderMigration {
  // yyyy-mm-dd since when the API is under migration.
  String since();

  // Checkers that are under migration.
  // Empty default signifies all checkers.
  // Strings identify checkers, in the same format as used for @SuppressWarnings
  String[] checkers() default {};
}

The since attribute conveys the date when migration began.
Tool users can use this information to decide how to use the signatures.

The checkers attribute conveys which checkers are under migration.
For example, some API might have been converted for null safety, but not yet for @CheckReturnValue.
The strings follow the same format as what @SuppressWarnings would use to suppress warnings from a particular checker.

Applicability to methods, classes, and packages allows nested specification of APIs, e.g. this whole package hasn't been transitioned for null-safety and this class additionally hasn't been transitioned for CRV.
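For example, a hedged sketch of an API whose nullness annotations are still migrating while its other contracts are already enforced (the checker-name string is illustrative):

@UnderMigration(since = "2019-01-30", checkers = {"nullness"})
@NotNullByDefault
class PartiallyMigrated {
  // Tools may report nullness findings against this API leniently for now,
  // while still fully enforcing, say, @CheckReturnValue.
  @Nullable Object find(String key) {
    return null;
  }
}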

Discussion

How should migration status be designated?

The proposal above includes a single since date that allows users to decide how to handle signatures.
This has the advantage that the API doesn't need to be changed and users are free to decide on severity.

Alternatives considered:

  • Use some enum constants to mark how far along the API is. E.g. initially something would be @UnderMigration(status = MigrationStatus.ALPHA), then change to @UnderMigration(status = MigrationStatus.BETA), before becoming @UnderMigration(status = MigrationStatus.FINAL), which is equivalent to having no annotation.
    Compared to the since date this has the disadvantage that the API needs to change to use different migration statuses.

  • Similarly, use an enum that describes severity e.g. Usage.INFO, Usage.WARN, and Usage.ERROR. This has the disadvantage that it prescribes tool behavior, which we don't want to prescribe in the API.

Instead use attributes on the analysis annotations

Instead of marking an API as @UnderMigration the type qualifiers could convey migration status, e.g. as @Nullable(since = "2019-01-28").
This has several disadvantages:

  • we need to add the same attributes to all code analysis annotations, leading to duplication and maintenance efforts
  • type use annotations should generally be short
  • it seems error prone to require every parameter/return type to specify different migration levels

Instead, the checkers attribute on @UnderMigration gives us the possibility to mark API as under migration for a particular checker.

Instead use three possible qualifiers

Alternatively, we could give developers the option to specify the "third option" explicitly, e.g. by using @LikelyNonNull or some such. Tools could then choose to interpret these as platform types or handle them optimistically or conservatively.
This has several disadvantages:

  • we would need a migration option for every code analysis check
  • it mixes migration and specification concerns
  • it seems hard to decide when to choose the third option relative to the other options.

Separating migration issues into @UnderMigration gives us one concept that applies across all code analysis checkers.

Why allow use on type parameter declarations?

This also allows migrating the annotations on a type parameter bound:

class C<@UnderMigration(since = "2019-01-10") T extends @Nullable Data> {}

Changing the upper bound of a type parameter has an impact on possible instantiations and wildcard uses.
The alternative would require marking at least the whole class as under migration.

Usage in the JDK or Android sources

The @UnderMigration annotations should be usable instead of using @RecentlyNullable/@RecentlyNonNull.
An API would be annotated with a migration date and tools can decide when they want to start enforcing the semantics of these annotations correctly.

Checker Framework Nullness Checker (CFNC) interaction

The CFNC provides stronger guarantees and more fine-grained specifications to express nullness, including initialization, map keys, and polymorphism.

CFNC currently supports many existing nullness annotations and treats them as aliases for its own nullness annotations.
To support JSpecify nullness annotations, this needs to be slightly tweaked, as now multiple annotations need to be allowed. For example, a method can be annotated as:

@JSp.Nullable @CFNC.PolyNull String lower(@JSp.Nullable @CFNC.PolyNull String in) {...}

Instead of treating these two annotations as conflicting, the CFNC should use the more specific information, which will be the CFNC annotations (otherwise there would be no need to add a CFNC annotation).
This allows us to correctly specify that something is @JSp.Nullable, but at the same time allow the CFNC to provide a more fine-grained view and suppress the error.

Map.get is handled specially by the CFNC. The new annotation will be:

class Map<K, V> {
  @JSp.Nullable V get(Object key) …
}

If the CFNC KeyFor Checker determines that the element is a map key, it can refine the get return type.

Pre-conditions can specify conditions that have to hold before a method can be invoked. A field that is declared as @JSp.Nullable can be refined to @NotNull by a pre-condition.

Post-conditions can specify conditions that have to hold after a method returns. A field that is declared @JSp.Nullable can be refined to @NotNull by a post-condition.

A @MonotonicNonNull field can remain null initially, but once it is set to something not null, it will stay not null. Such a field would be declared as @JSp.Nullable @CFNC.MonotonicNonNull. This gives the conservative information that the field might be null and allows the CFNC to refine the type as required.

Object initialization is a tricky topic in Java. A field that is declared @NotNull is still null until it is initialized. The CFNC additionally supports the Freedom-before-Commitment (FbC) type system to keep track of object initialization.
JSpecify and FbC annotations are orthogonal. We will not require that a JSp analysis raises initialization errors, reducing the number of false positive warnings and not requiring FbC annotations. A CFNC user can then gradually add FbC annotations to ensure safe object initialization.
(ErrorProne prevents some method invocations in constructors, so some initialization issues are prevented.)

We need to be careful to not require nullness errors that might be prevented by a more fine-grained specification that a static analysis tool could support.

Defaulting for upper bounds in JSp as discussed #12 (comment) is different from the CFNC default.
We will have to analyze the impact of this change.

[Edited by kevinb9n to s/CodeAnalysis/JSpecify/]

Reassigning a non-null parameter with a nullable expression? [working decision: expect a finding]

Imagine a method:

void printHierarchy(@NotNull Element e) {
  while (e != null) {
    System.out.println(e);
    e = e.getParent();
  }
}

Note that the not-null parameter is reassigned, eventually to null. Should this be allowed? Whatever we choose, it probably should apply to local variables as well. I see 3 alternatives:

  1. Disallow this, since it's assigning null into a non-null variable.
  2. Allow this and say that @NotNull only applies to the argument passed into the parameter. In the case of a local variable, only to its initializer.
  3. Leave it to be checker-dependent, e.g. since it's not exactly about signatures.

Kotlin disallows such code. IntelliJ also did, but after many complaints we started to support this, because apparently it's a relatively frequent pattern, and it's too convenient to write loops this way.

should some type arguments not be affected by surrounding defaults?

Consider a class like Guava's ImmutableList, whose type parameter would presumably be declared as class ImmutableList<T extends @NotNull Object>, since the class is null-hostile. Now in another source file:

class Foo {
  @DefaultNullable
  ImmutableList<String> names() {...}

  @DefaultNullnessUnknown
  ImmutableList<String> values() {...}

  ImmutableList<@NullnessUnknown String> keys() {...}

  @DefaultNullnessUnknown
  List<String> modifiable() {...}
}
  • Should we consider names() to return a ImmutableList<@NotNull String> despite the conflicting default? That seems convenient but is non-obvious from looking at Foo alone.
  • Should we expect the @NotNull to be made explicit, since it conflicts with the default? That avoids the non-obvious problem from the last question but would require "spraying" essentially redundant @NotNull annotations.
  • Should we even worry about @DefaultNullable, since we hope it's rarely used? If it's rarely used then maybe requiring explicit @NotNull is just tolerable.

What about values(), which has an unknown default? I think things get extra-tricky here, since this will happen in unannotated legacy code that happens to be using ImmutableList, and it therefore seems implausible to require explicit @NotNull qualifiers (at least when there's no default annotation; maybe it's plausible with an explicit @DefaultNullnessUnknown, but we've so far tried not to distinguish "no default" from @DefaultNullnessUnknown).

It seems desirable to interpret values() to return ImmutableList<@NotNull String> regardless. But does that match the user's intuition in the context of "unknown nullness by default"? Possibly not but on the other hand there seems little harm at least in this example.

One counter-point could be to consider an example like keys(). Here the user was really clear that they want the returned ImmutableList's elements to be considered @NullnessUnknown, and it seems problematic to silently "upgrade" that to @NotNull elements. On the other hand we could consider such a use of @NullnessUnknown to be invalid, though that would be the first time we've considered making unknown nullness invalid. In the case of ImmutableList that may be plausible, but I'm not sure we can make cases like keys() invalid in general.

One other, maybe related, point is to consider modifiable(), assuming List is declared to allow nullable elements, i.e., interface List<T extends @Nullable Object>. Here we presumably would interpret modifiable() to return effectively List<@NullnessUnknown String>, and given that, it seems potentially confusing if values(), having the same annotations otherwise, effectively returns ImmutableList<@NotNull String>. But again, while it feels inconsistent, maybe there is just no harm.

Consider making nullness fully "tri-state" instead of bi-state-with-some-files-unannotated

@abreslav @kevinb9n one thing I've been wrestling with around nullness annotations is whether and how unannotated parameters and method results have their place.

First off, they do seem to have their place, e.g., in Map.get() it can be convenient to leave the result unannotated. That's also a good way to be compatible with more powerful checkers and additional annotations, such as checkerframework.org's type system around known map keys. It seems a similar issue comes up around checkerframework.org's @PolyNull, where we'd probably want to consider any @PolyNull parameters and method results unannotated.

If unannotated types have their place, though, then we have to make sure we accommodate that with defaulting annotations. For instance, @DefaultNotNull can't be placed on Map if we want to leave get's result unannotated. The user would have to instead use @DefaultNotNull on each Map method, or annotate every single type use in the interface explicitly, just so get's result can be unannotated.

Eclipse's annotations seem to have a way of getting around this: it's possible to un-default a method inside a non-null defaulted class, for instance.

Another option would be an explicit way of, well, annotating a given type use as unannotated, e.g., for Map (leaving aside whether Map is a good candidate for defaulting in the first place):

@DefaultNotNull
public interface Map<K extends @Nullable Object, V extends @Nullable Object> { ...
  public @Unannotated V get(K key);
}

While this doesn't look very elegant, it is appealing to allow considering annotations defined by more powerful checkers as effectively meaning "unannotated". For instance, @PolyNull could be interpreted as unannotated, e.g., by placing some marker annotation we define on @PolyNull's definition.

My strawman proposal here is:

  • Allow @DefaultNotNull on methods in addition to classes and packages
  • Add a boolean parameter or similar to allow "canceling out" @DefaultNotNull from a larger scope
  • Allow @DefaultNotNull(false) to be placed as a meta-annotation on other annotations. It could then be placed on @PolyNull or map-related annotations defined for more powerful checkers.
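A hedged sketch of what those three bullets could look like together, using the working annotation names from this thread:

@DefaultNotNull
interface Map<K extends @Nullable Object, V extends @Nullable Object> {
  // Method-level defaulting plus a boolean to cancel the surrounding default,
  // so get's result stays unannotated.
  @DefaultNotNull(false)
  V get(K key);
}

// As a meta-annotation: uses of another checker's qualifier (a PolyNull-like
// annotation, shown here only as an example) are then treated as unannotated.
@DefaultNotNull(false)
@interface PolyNull {}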

Thoughts?

Nullability of a type based on generic parameter depending on the specific argument

Consider the following class:

interface Box<T> {
    T get();
    void set(T t);
}

Having a type Box<@Nullable String> or Box<String?> in Kotlin, what's the return type of Box.get then?
We had an idea to introduce a special annotation @StrictNullness for type parameters that makes the nullability of types based on them depend on the nullability of the specified type argument.
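A hedged sketch of the idea (the @StrictNullness marker is the hypothetical annotation just described):

interface Box<@StrictNullness T> {
  T get();
  void set(T t);
}

class BoxDemo {
  void demo(Box<@Nullable String> a, Box<String> b) {
    @Nullable String x = a.get();  // nullability would follow the type argument: nullable here
    String y = b.get();            // and not-null here
  }
}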

Nullability of Map.get() and similar methods

The question was initially raised by @amaembo:

Map.get(). If it's annotated via Nullable, we would have a gazillion of false-positives, because often people know which elements a Map contains. It's better to leave such methods unannotated, which should mean "null may or may not be returned, it's beyond the type system and up to the programmer to deal with it". However you cannot leave a method unannotated under DefaultNotNull. Should we provide a way to cancel DefaultNotNull temporarily (for a given class/method)? Note that Eclipse annotations can do this.

TYPE_USE locations and their semantics for nullness

In bytecode, a TYPE_USE annotation uses a TargetType constant to store what an annotation applies to. It is useful to go through all locations and discuss their semantics, also to ensure we cover all source locations.
A type path further refines where in the type the annotation applies. Here it is enough to distinguish top-level from non-top-level uses.
For nested types, only the deepest nested type can be annotated. All enclosing types are implicitly NotNull and no explicit annotations on outer types are allowed. Note that type arguments to outer types might still be annotated. (This is a bit tricky to check, because depending on the type, an empty type path might be legal or not.)
The reference implementation should enforce correct usage of type annotations.
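A hedged example of the nested-type rule, using the working @Nullable annotation:

class Outer<T> {
  class Inner {}

  // Only the deepest nested type may carry a top-level annotation; the enclosing type is
  // implicitly not-null. Type arguments of the outer type may still be annotated.
  Outer<@Nullable String>.@Nullable Inner ok;

  // @Nullable Outer<String>.Inner bad;   // forbidden by the rule above: annotation on the outer type
}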

  • CLASS_TYPE_PARAMETER(0x00)
    Annotation on the type parameter declaration. The lower bound is always NotNull, so forbid annotation.

  • METHOD_TYPE_PARAMETER(0x01)
    Annotation on the type parameter declaration. The lower bound is always NotNull, so forbid annotation.

  • CLASS_EXTENDS(0x10)
    Annotation on the class extends and implements clauses.
    At the top level, annotations make no sense; in type argument positions they are allowed.
    For example: class C extends @Nullable List<@Nullable String> { … }
    The super-class/super-interface are always not-null and no annotations are allowed.
    Type arguments in those types can use annotations.

  • CLASS_TYPE_PARAMETER_BOUND(0x11)
    Annotation on the type parameter bound. Can occur, both on the top-level and inside.

  • METHOD_TYPE_PARAMETER_BOUND(0x12)
    Annotation on the type parameter bound. Can occur, both on the top-level and inside.

  • FIELD(0x13)
    Can occur with arbitrary path.

  • METHOD_RETURN(0x14)
    Can occur with arbitrary path.

  • METHOD_RECEIVER(0x15)
    Method receiver types are always non-null. Forbid all annotations. Even for a generic class, I don't think the receiver ever needs an annotation.

  • METHOD_FORMAL_PARAMETER(0x16)
    Can occur with arbitrary path.

  • THROWS(0x17)
    Types in throws clauses are always non-null and no explicit annotations are allowed. No type arguments are allowed for exception types.
    For example void foo() throws @Nullable MyException { … } is forbidden.

  • LOCAL_VARIABLE(0x40, true)
    Can occur with arbitrary path.

  • RESOURCE_VARIABLE(0x41, true)
    Can occur with arbitrary path.

  • EXCEPTION_PARAMETER(0x42, true)
    Types in catch clauses are always non-null and no explicit annotations are allowed. No type arguments are allowed for exception types.

  • INSTANCEOF(0x43, true)
    See discussion about casts/instanceof in separate issue. The top-level annotation should depend on the cast expression. Type arguments could be annotated, but JVM won’t check.

  • NEW(0x44, true)
    An object creation is always non-null and no top-level annotations are allowed.
    Type arguments can use annotations.

  • CONSTRUCTOR_REFERENCE(0x45, true)
    A constructor reference is always non-null and no top-level annotations are allowed.

  • METHOD_REFERENCE(0x46, true)
    A method reference is always non-null and no top-level annotations are allowed.

  • CAST(0x47, true)
    See discussion about casts/instanceof in separate issue. The top-level annotation should depend on the cast expression. Type arguments could be annotated, but JVM won’t check.

  • CONSTRUCTOR_INVOCATION_TYPE_ARGUMENT(0x48, true)
    Can occur with arbitrary path.

  • METHOD_INVOCATION_TYPE_ARGUMENT(0x49, true)
    Can occur with arbitrary path.

  • CONSTRUCTOR_REFERENCE_TYPE_ARGUMENT(0x4A, true)
    Can occur with arbitrary path.

  • METHOD_REFERENCE_TYPE_ARGUMENT(0x4B, true)
    Can occur with arbitrary path.

  • UNKNOWN(0xFF)
    Should never occur.

Write up a short overview/charter of this project

I can probably just do this as an edit to the front-page readme file.

My intention is to cover points like

  • Our goal is to allow all Java devs to benefit from static analysis features that can't function without recognizing annotations in the code.
  • We'll enable that by providing an artifact of well-chosen, designed, and specified annotations that (with luck and effort) will emerge as the clear choice for Java devs to standardize on
  • Ultimately we are creating a thin concentric shell around the Java language; for all intents and purposes these will become part of the "language" developers read and write in, just as @Override has. This is a big responsibility and justifies having a high bar for inclusion in this project, and high standards for design/naming/etc.

Thoughts before I go further?

Decide whether and how to specify interpretation of other nullness annotations besides our own

I'm working through how our @LegacyNullness annotation would work (for the naming discussion, see #32). In particular, in the context of generics (#19), I wasn't quite sure when legacy parametricity should apply. But this seems like a general topic we need to discuss.

For clarity, I'll use @305XXX and @CAAXXX with XXX being either NotNull or Nullable, where the 305 version stands for any of the many existing nullness annotations.

Let's say we come across some bytecode, corresponding to:

class C {
  Object fNone() ...
  @305NotNull f305NN() ...
  @CAANotNull fCAANN() ...
  @305Nullable f305Nbl() ...
  @CAANullable fCAANbl() ...
}

And no particular default applies to the class.

We currently have no mechanism to decide how this bytecode was generated.
So what is the nullness-augmented signature for this class?

class C {
  @LegacyNullable Object fNone() ...
  @??? f305NN() ...
  @CAANotNull fCAANN() ...
  @CAANullable f305Nbl() ...
  @CAANullable fCAANbl() ...
}

For fNone we don't have any annotations, so we use @LegacyNullable.

For fCAANN() we have a CAA not-null annotation and we trust it. Similarly for fCAANbl.

For f305Nbl we have a legacy nullness annotation and it would seem safe to map that to @CAANullable instead of @LegacyNullable.

However, what should be the signature for f305NN? The bytecode uses a legacy not-null annotation, but we have no reason to trust the annotation.
Some options:

  1. treat it like @LegacyNullable, completely ignoring legacy not-null annotations
  2. treat it like @CAANotNull, blindly trusting existing not-null annotations
  3. add a separate @LegacyNotNull to distinguish the cases.

Option 1. seems inconvenient, forcing all users to switch what nullness annotations they use.

Option 2. on the other hand might be dangerous, because the legacy not-null annotation might not have been checked.

Option 3. would require yet another annotation and deciding how it would be handled differently.

Are there other options?

Scope of package-level default annotations [working decision: not subpackages]

The question was initially raised by @amaembo:

If it’s applied to the package, I suggest that it should not be applied to any subpackages, so it’s required to write DefaultNotNull once per every package.

cpovirk edit much later:

(See, for example, this question, these feature requests (IntelliJ, JDK, jOOQ), these StackOverflow questions (1, 2), and this behavior in the Checker Framework (and I want to say maybe some behavior in Spring?).)

Naming the third nullness annotation (the one that matches the behavior of unannotated code)

We have seemed fairly comfortable with @Nullable and @NotNull (the latter being a shorthand for @NotNullable), but oy, the third one.

The word "legacy" should be involved; we have never come up with something more appropriate. It is the nullness that all Java types have by default before someone goes through and annotates. Admittedly, people will probably use it (rarely) for special cases with non-legacy code, but we think we want to at least discourage that.

I think we also don't want to talk about it as being more closely related to @Nullable than to @NotNull; its relationship to these two should be symmetric. It represents indecision between the two. Conservative analysis treats it more like @Nullable when assigning from, but more like @NotNull when assigning to; lenient analysis would have it the other way around.

Names

@LegacyNull / @DefaultLegacyNull
@LegacyNullness / @DefaultLegacyNullness
@NullableLegacy / @DefaultNullableLegacy (sorts/groups better)
@NullUnchecked / @DefaultNullUnchecked (same)
@SchroedingersNull

@NotNull field semantics

Basic question: when a field is marked as @NotNull, when can it be safely assumed to be non-null?

If a field is final, then it probably becomes not-null after its initialization. But before that, other methods can be called, including via the superclass constructor, and in general it seems undecidable to determine whether a specific field reference happens after its initialization or before it. IntelliJ does some guessing here to avoid marking if (field == null) as "always true" in such semi-initialized places, but to avoid false positives it also sometimes skips useful warnings.

If a field is non-final, things get even more complicated: not even the compiler complains about obvious field dereferences before its initialization. Should we even allow not-null fields that are not initialized before or inside every constructor? AFAIK Eclipse doesn't. IntelliJ has an option for this, and some people don't like it because they have non-null fields initialized in test set-up or by dependency injection.
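A hedged example of that non-final case, using the working @NotNull annotation:

class Widget {
  @NotNull String name;              // never assigned in any constructor

  void init() {                      // e.g. test set-up or a dependency-injection entry point
    this.name = "widget";
  }

  int nameLength() {
    return name.length();            // NPE at runtime if init() was never called --
                                     // should a checker flag the field declaration instead?
  }
}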

Default nullness of type parameter bound in defaulted context

[update: direct link to doc about this issue]

In the document, it's been said about @DefaultNotNull:

This annotation establishes defaults for unannotated method parameters etc. in the annotated package or class/interface (but not sub-packages or subtypes) and matches CF’s defaults except for declarations of type variables without explicit bounds, which default to non-null in this proposal

Does it mean that generic (type) parameters have not-null bounds independently of the presence of @DefaultNotNull somewhere in the scope, or have I just misread the statement?
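A hedged reading of the quoted rule for the in-scope case:

@DefaultNotNull
class Holder<T> {   // under the quoted rule, effectively <T extends @NotNull Object>
  T value;          // so Holder<@Nullable String> would not be a legal instantiation
}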

Expressing the "strictness" of enforcement

Google-side discussions of #2 led us to the proposal below.
I wanted to write up my understanding of the discussions, to give people a chance to think about this before the meeting on Tuesday.
@kevinb9n please clarify or start with a different issue.

Basic idea

Instead of adding additional nullness annotations or adding attributes to the nullness annotations, we add additional declaration annotations that express the "strictness" of enforcement.

The default is that annotation semantics should be enforced by tools.
Specifically-marked APIs have alternative, more relaxed enforcement, without specifying exactly how that relaxed handling is done.

A conservative tool could just ignore this additional information.
Another tool could turn related errors into warnings.
Yet another tool could use runtime checks for such methods.

The particular checker (type)-annotations always have their precise semantics.
For example, a @Nullable return type means that the method can sometimes return null.
The additional declaration annotation conveys how convenient enforcing the property would be.
For example, if the method's result is usually expected to be non-null, a tool could choose not to enforce null checks.

Naming

We need to discuss what qualifier names would best express this intent. Some ideas:

  • @MoreEnforced / @LessEnforced
  • @StrictChecks / @LenientChecks
  • @PreventMisuse / @AllowMisuse

Scoping

As the less-enforced status should be used sparingly, we could restrict its usage to method declarations.
We would then also only need one annotation, where the implicit alternative is strict enforcement.

Alternatively, we could allow the annotation on the package/class/method level to allow specifying on all levels of granularity. We would then probably want two annotations to allow specifying possible nestings, e.g. the package is marked as less-enforced while a class within it is strictly enforced.

Relation to Migration

#26 proposes an @UnderMigration annotation. We feel that these express two separate dimensions.
Migration is time-bounded, whereas enforcement is about the particular API and will not change.

Strict enforcement and additional checker information

One thing to note: even for a strictly-enforced @Nullable annotation, a checker might have some additional information that tells it that in this particular use something is non-null. So strict enforcement doesn’t mean every @Nullable dereference will give an error.

For example, a checker could support pre-/post-condition annotations on a method that gives additional information in certain invocations.

Selecting which checks should be less enforced.

A blanket @LessEnforced is probably too coarse. Similar to @UnderMigration we need a way to express which checks are less enforced.

@LessEnforced(Nullable.class)
@CheckReturnValue @Nullable String getName() { ... }

would express that only the nullness part of the specification is less strict and the @CRV should still be enforced.

Like for @UnderMigration we need to discuss what the best representation for this is, e.g. class literals as above, canonical checker names, names that are also used for warning suppressions, etc.

Example

#2 contains many examples, all of which should be able to use this proposal. Let's look at findViewById:

In the use:

setContentView(R.layout.foo);
Button button = findViewById(R.id.send);

Errors on uses of button would be inconvenient for users.

We would declare the method as:

@LessEnforced
@Nullable View findViewById(int id) {...}

The return type is still correctly annotated as @Nullable. However, the @LessEnforced tells clients of this API to be less strict about enforcing the rules, which in this case could mean treating the method as returning a @NonNull value.

An analysis is still free to give better information, e.g. by checking proper resource IDs and warning on unknown values.

Whether to define a simplification/revision of JLS "type paths"

Our formal specifications (aimed mainly at tool authors) will have little choice but to be based on the terms and concepts of the JLS as they are written there.

We have the freedom, however, to define alternative models for our own communication, on which to base our intuitions, and importantly, on which we will base user documentation -- so long as we can maintain a mapping between these models and those of the JLS.

I think this is worth doing when we can create a model that typical users will have an easier time wrapping their heads around.

I would like to be able to think of compound types as being a simple tree of type components, where every node is a bona fide type, and where the edge types are one of:

  • Parent uses child as type argument #N
  • Parent uses child as type argument #N with ? extends
  • Parent uses child as type argument #N with ? super
  • Parent is an array (or varargs parameter) whose component type is the child
  • Parent is an inner type whose nearest outer type is the child

This requires some distance from the JLS concept of a "type path", which I'm told treats a wildcard itself as a node, and reverses the relationship of inner to outer types.

I see this model as simplifying because intuitively the type trees for List<String>, List<? extends String>, and String[] ought to resemble each other. And I would like us to be able to think of inner types by mapping them simply onto something else we all know, almost as if each one extended this common supertype:

interface InnerType<O> {
  O myOuterInstance();
}

The model just suggested may well be too naive. The best model is the one that is as naive as it can be, but no more naive than that. That is, we should start as idealistic as possible and corrupt the model with more complexity if and when it really is important.

If we can agree on a version of the above that is an acceptable model for our purposes, or agree to stick with JLS type paths, that should resolve this particular issue.

Textual explanation for nullability annotations

The question was initially raised by @amaembo:

JetBrains' Nullable annotation has a textual explanation property which says when the value could be null. E.g. @Nullable("when logged in anonymously") String getUserName(). The message could be used by a static analyser to produce friendlier warnings, e.g. "calling .trim() on the result of getUserName() may cause NullPointerException: when logged in anonymously". Do we need these (not insisting)?

Accommodate users of well-known co(ntra)variant types who don't use wildcards? [working decision: no]

Code accepting a Supplier<Foo> and Predicate<Bar> should basically always accept Supplier<? extends Foo> and Predicate<? super Bar> instead. This is well-plumbed territory (see EJ3e Item 31), and as far as we are concerned, it's non-controversial: you can avoid the wildcard if you want, but you're going to have a bad time with or without us and we won't feel sorry for you. (We'll continue to dream of declaration-site variance in a future version of Java.)

Fortunately our nullness annotations seem to intersect pretty sanely with these wildcards - they basically do what you would want them to do. But what of the users who don't want to use them?

We may want to think about what checkers should do in these cases, so we can consider what boundaries we may want to place on what a compliant checker can do.

  1. Do nothing to accommodate this case

  2. Or be lenient about assigning in either direction between Abc<@NotNull Xyz> and Abc<@Nullable Xyz>, though this may be a sizable hole in the safety net

  3. Or actually maintain special knowledge of common types that are known to be effectively co(ntra)variant, and use that knowledge accordingly for our purposes.

I personally favor 1.
