ahoo-wang / cosid

Universal, flexible, high-performance distributed ID generator.

Home Page: https://cosid.ahoo.me

License: Apache License 2.0

Java 98.49% Lua 0.56% Shell 0.10% TypeScript 0.71% JavaScript 0.07% Dockerfile 0.07%
distributed id idgenerator snowflake kubernetes k8s cloud-native redis id-generator generator

cosid's Introduction

CosId Universal, flexible, high-performance distributed ID generator


Chinese Documentation

Introduction

CosId aims to provide a universal, flexible and high-performance distributed ID generator.

  • CosIdGenerator: stand-alone TPS of 15,570,085 ops/s, three times that of UUID.randomUUID(); IDs are globally trend-increasing, based on time.
  • SnowflakeId: stand-alone TPS of 4,096,000 ops/s (JMH Benchmark). It solves the two major problems of SnowflakeId: machine ID allocation and the clock moving backwards. It also provides a friendlier, more flexible experience.
  • SegmentId: fetches a segment (Step) of IDs at a time, reducing the network IO frequency against the IdSegment distributor and improving performance.
    • IdSegmentDistributor:
      • RedisIdSegmentDistributor: IdSegment distributor based on Redis.
      • JdbcIdSegmentDistributor: JDBC-based IdSegment distributor; supports various relational databases.
      • ZookeeperIdSegmentDistributor: IdSegment distributor based on ZooKeeper.
      • MongoIdSegmentDistributor: IdSegment distributor based on MongoDB.
  • SegmentChainId (recommended): a lock-free enhancement of SegmentId (see the design diagram). PrefetchWorker maintains a safe distance, so SegmentChainId achieves TPS close to AtomicLong: 127,439,148+ ops/s (JMH Benchmark).
    • PrefetchWorker maintains a safe distance (safeDistance) and supports dynamic expansion and contraction of safeDistance based on hunger status.
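The segment mechanism above can be sketched in a few lines of Java. This is an illustrative model only, not the CosId API; the AtomicLong stands in for the remote distributor state (Redis/JDBC/ZooKeeper/MongoDB):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative model of SegmentId: each fetch reserves a whole segment
// [max - step, max) from the distributor, and IDs within the segment are
// handed out locally without further network IO.
public class SegmentSketch {
    // Stands in for the remote distributor state (Redis/JDBC/ZooKeeper/MongoDB).
    private final AtomicLong distributorMaxId = new AtomicLong(0);
    private final long step;
    private long current;
    private long max;

    public SegmentSketch(long step) {
        this.step = step;
        fetchSegment();
    }

    // One "network" round-trip per `step` IDs.
    private void fetchSegment() {
        this.max = distributorMaxId.addAndGet(step);
        this.current = max - step;
    }

    public synchronized long generate() {
        if (current >= max) {
            fetchSegment();
        }
        return current++;
    }
}
```

SegmentChainId extends this idea by prefetching upcoming segments in the background (PrefetchWorker), so generate() almost never has to wait on a fetch.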

SnowflakeId


SnowflakeId is a distributed ID algorithm that partitions the bits of a Long (64-bit) to generate an ID. The general bit allocation scheme is: timestamp (41-bit) + machineId (10-bit) + sequence (12-bit) = 63-bit.

  • 41-bit timestamp: (1L << 41) / (1000L * 60 * 60 * 24 * 365) ≈ 69 years of timestamps can be stored, i.e. the usable absolute time is EPOCH + 69 years. Generally we customize EPOCH as the product development time; we can also extend the available time by compressing the other segments to give the timestamp more bits.
  • 10-bit machineId: (1L << 10) = 1024, i.e. up to 1024 instances of the same service can be deployed (there is no master/slave replica concept here; the Kubernetes definition of instance is used directly). Generally that many are not needed, so this is redefined according to the deployment scale.
  • 12-bit sequence: (1L << 12) * 1000 = 4,096,000, i.e. a single machine can generate about 4.09 million IDs per second, and a cluster of the same service can generate 4,096,000 * 1024 = 4,194,304,000 ≈ 4.19 billion (TPS).
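The bit allocation above can be illustrated with plain bit arithmetic. This is a sketch, not CosId source code:

```java
// Illustrative sketch of the 41/10/12 bit layout described above.
public class SnowflakeLayout {
    static final int TIMESTAMP_BIT = 41;
    static final int MACHINE_BIT = 10;
    static final int SEQUENCE_BIT = 12;

    // timestampDiff is milliseconds elapsed since the custom EPOCH.
    static long compose(long timestampDiff, long machineId, long sequence) {
        return (timestampDiff << (MACHINE_BIT + SEQUENCE_BIT))
            | (machineId << SEQUENCE_BIT)
            | sequence;
    }

    // Largest storable timestamp offset: 2^41 - 1 milliseconds.
    static long maxTimestamp() {
        return ~(-1L << TIMESTAMP_BIT);
    }

    public static void main(String[] args) {
        // About 69 years of millisecond timestamps fit in 41 bits.
        System.out.println(maxTimestamp() / (1000L * 60 * 60 * 24 * 365));
    }
}
```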

It can be seen from the design of SnowflakeId:

  • 👍 The first 41 bits are a timestamp, so SnowflakeId is locally monotonically increasing and, subject to global clock synchronization, globally trend-increasing.
  • 👍 SnowflakeId has no strong dependency on any third-party middleware, and its performance is very high.
  • 👍 The bit allocation scheme can be flexibly configured to the needs of the business system for optimal effect.
  • 👎 Strong reliance on the local clock: a clock that moves backwards can cause ID duplication.
  • 👎 The machineId needs to be set manually. Assigning machineId by hand during actual deployment is very inefficient.

CosId-SnowflakeId

It solves the two major problems of SnowflakeId: machine ID allocation and the clock moving backwards. It also provides a friendlier, more flexible experience.

MachineIdDistributor

Currently CosId provides the following three MachineId distributors:

ManualMachineIdDistributor

cosid:
  snowflake:
    machine:
      distributor:
        type: manual
        manual:
          machine-id: 0

Manually distribute MachineId

StatefulSetMachineIdDistributor

cosid:
  snowflake:
    machine:
      distributor:
        type: stateful_set

Use the stable identification ID provided by the StatefulSet of Kubernetes as the machine number.

RedisMachineIdDistributor

(diagrams: Redis machine ID distribution; machine ID safe guard)

cosid:
  snowflake:
    machine:
      distributor:
        type: redis

Use Redis as the distribution store for the machine number.

ClockBackwardsSynchronizer

cosid:
  snowflake:
    clock-backwards:
      spin-threshold: 10
      broken-threshold: 2000

The default clock-backwards synchronizer, DefaultClockBackwardsSynchronizer, uses an active-wait strategy. spinThreshold (default 10 milliseconds) sets the spin-wait threshold: when the backwards drift exceeds spinThreshold, thread sleep is used to wait for clock synchronization; if it exceeds brokenThreshold (default 2 seconds), a ClockTooManyBackwardsException is thrown directly.
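The strategy can be outlined as follows. This is an illustrative sketch only; the real DefaultClockBackwardsSynchronizer internals may differ, and IllegalStateException here is a stand-in for ClockTooManyBackwardsException:

```java
// Outline of the active-wait clock-backwards strategy (illustrative).
public class ClockBackwardsSketch {
    static final long SPIN_THRESHOLD_MS = 10;     // cosid.snowflake.clock-backwards.spin-threshold
    static final long BROKEN_THRESHOLD_MS = 2000; // cosid.snowflake.clock-backwards.broken-threshold

    static void sync(long lastTimestamp) {
        long backwards = lastTimestamp - System.currentTimeMillis();
        if (backwards <= 0) {
            return; // clock has not moved backwards
        }
        if (backwards > BROKEN_THRESHOLD_MS) {
            // Too far backwards: fail fast (stand-in for ClockTooManyBackwardsException).
            throw new IllegalStateException("clock moved backwards " + backwards + "ms");
        }
        if (backwards <= SPIN_THRESHOLD_MS) {
            // Small drift: spin-wait until the clock catches up.
            while (System.currentTimeMillis() < lastTimestamp) {
                Thread.onSpinWait();
            }
        } else {
            // Larger drift: sleep instead of burning CPU.
            try {
                Thread.sleep(backwards);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```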

MachineStateStorage

public class MachineState {
    public static final MachineState NOT_FOUND = of(-1, -1);
    private final int machineId;
    private final long lastTimeStamp;
    
    public MachineState(int machineId, long lastTimeStamp) {
        this.machineId = machineId;
        this.lastTimeStamp = lastTimeStamp;
    }
    
    public int getMachineId() {
        return machineId;
    }
    
    public long getLastTimeStamp() {
        return lastTimeStamp;
    }
    
    public static MachineState of(int machineId, long lastStamp) {
        return new MachineState(machineId, lastStamp);
    }
}
cosid:
  snowflake:
    machine:
      state-storage:
        local:
          state-location: ./cosid-machine-state/

The default LocalMachineStateStorage local machine state storage uses a local file to store the machine number and the most recent timestamp, which is used as a MachineState cache.

ClockSyncSnowflakeId

cosid:
  snowflake:
    share:
      clock-sync: true

The default SnowflakeId throws a ClockBackwardsException directly when the clock moves backwards. ClockSyncSnowflakeId instead uses the ClockBackwardsSynchronizer to actively wait for clock synchronization and regenerate the ID, providing a more user-friendly experience.

SafeJavaScriptSnowflakeId

SnowflakeId snowflakeId = SafeJavaScriptSnowflakeId.ofMillisecond(1);

JavaScript's Number.MAX_SAFE_INTEGER has only 53 bits. If the 63-bit SnowflakeId is returned directly to the front end, the value overflows. We usually either convert the SnowflakeId to a String, or customize the SnowflakeId bit allocation to shorten it so that the ID does not overflow when given to the front end.
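The 53-bit limit is easy to verify with bit arithmetic. This check is illustrative and not part of CosId:

```java
// Why 63-bit IDs overflow in JavaScript: Number.MAX_SAFE_INTEGER is 2^53 - 1.
public class JsSafeCheck {
    static final long JS_MAX_SAFE_INTEGER = (1L << 53) - 1;

    // A long is safe to hand to a JS front end only if it fits in 53 bits.
    static boolean isJsSafe(long id) {
        return id >= 0 && id <= JS_MAX_SAFE_INTEGER;
    }
}
```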

SnowflakeFriendlyId (Can parse SnowflakeId into a more readable SnowflakeIdState)

cosid:
  snowflake:
    share:
      friendly: true
public class SnowflakeIdState {
    
    private final long id;
    
    private final int machineId;
    
    private final long sequence;
    
    private final LocalDateTime timestamp;
    /**
     * {@link #timestamp}-{@link #machineId}-{@link #sequence}
     */
    private final String friendlyId;
}
public interface SnowflakeFriendlyId extends SnowflakeId {
    
    SnowflakeIdState friendlyId(long id);
    
    SnowflakeIdState ofFriendlyId(String friendlyId);
    
    default SnowflakeIdState friendlyId() {
        long id = generate();
        return friendlyId(id);
    }
}
    SnowflakeFriendlyId snowflakeFriendlyId = new DefaultSnowflakeFriendlyId(snowflakeId);
    SnowflakeIdState idState = snowflakeFriendlyId.friendlyId();
    idState.getFriendlyId(); // 20210623131730192-1-0

SegmentId


RedisIdSegmentDistributor

cosid:
  segment:
    enabled: true
    distributor:
      type: redis

JdbcIdSegmentDistributor

Initialize the cosid table

create table if not exists cosid
(
    name            varchar(100) not null comment '{namespace}.{name}',
    last_max_id     bigint       not null default 0,
    last_fetch_time bigint       not null,
    constraint cosid_pk
        primary key (name)
) engine = InnoDB;
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/test_db
    username: root
    password: root
cosid:
  segment:
    enabled: true
    distributor:
      type: jdbc
      jdbc:
        enable-auto-init-cosid-table: false
        enable-auto-init-id-segment: true

After enabling enable-auto-init-id-segment: true, the application will try to create the IdSegment record at startup to avoid manual creation. This is similar to executing the following initialization SQL script; there is no need to worry about duplicate inserts, because name is the primary key.

insert into cosid
    (name, last_max_id, last_fetch_time)
    value
    ('namespace.name', 0, unix_timestamp());
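Logically, the distributor treats the cosid table as a per-name counter. The in-memory model below is a sketch of that behavior only, not the actual JdbcIdSegmentDistributor implementation, which persists the state in the cosid table shown above:

```java
import java.util.HashMap;
import java.util.Map;

// In-memory model of the cosid table: one last_max_id counter per name.
public class CosidTableModel {
    private final Map<String, Long> lastMaxId = new HashMap<>();

    // Mirrors the idempotent auto-init insert: name is the primary key,
    // so a second init for the same name is a no-op.
    public void initIdSegment(String name) {
        lastMaxId.putIfAbsent(name, 0L);
    }

    // Advance last_max_id by `step` and return the new upper bound of the segment.
    public long nextMaxId(String name, long step) {
        return lastMaxId.merge(name, step, Long::sum);
    }
}
```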

SegmentChainId


cosid:
  segment:
    enabled: true
    mode: chain
    chain:
      safe-distance: 5
      prefetch-worker:
        core-pool-size: 2
        prefetch-period: 1s
    distributor:
      type: redis
    share:
      offset: 0
      step: 100
    provider:
      bizC:
        offset: 10000
        step: 100
      bizD:
        offset: 10000
        step: 100

IdGeneratorProvider

cosid:
  snowflake:
    provider:
      bizA:
        #      timestamp-bit:
        sequence-bit: 12
      bizB:
        #      timestamp-bit:
        sequence-bit: 12
IdGenerator idGenerator=idGeneratorProvider.get("bizA");

In practice, we generally do not use the same IdGenerator for all business services; different businesses use different IdGenerators. IdGeneratorProvider exists to solve this problem: it is a container of IdGenerator instances, from which you can get the corresponding IdGenerator by business name.
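The container role can be sketched as a simple registry with a shared fallback (cf. IdGeneratorProvider.SHARE). This is an illustrative model only; the real IdGeneratorProvider interface in CosId may differ:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Registry of per-business generators with a shared fallback (illustrative).
public class ProviderSketch<T> {
    private final Map<String, T> generators = new ConcurrentHashMap<>();
    private final T share;

    public ProviderSketch(T share) {
        this.share = share;
    }

    public void register(String name, T generator) {
        generators.put(name, generator);
    }

    // Look up the dedicated generator for a business name; fall back to
    // the shared generator when none was configured.
    public T get(String name) {
        return generators.getOrDefault(name, share);
    }
}
```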

CosIdPlugin (MyBatis Plugin)

Kotlin DSL

    implementation("me.ahoo.cosid:cosid-mybatis:${cosidVersion}")
@Target({ElementType.FIELD})
@Documented
@Retention(RetentionPolicy.RUNTIME)
public @interface CosId {
    String value() default IdGeneratorProvider.SHARE;
    
    boolean friendlyId() default false;
}
public class LongIdEntity {
    
    @CosId(value = "safeJs")
    private Long id;
    
    public Long getId() {
        return id;
    }
    
    public void setId(Long id) {
        this.id = id;
    }
}

public class FriendlyIdEntity {
    
    @CosId(friendlyId = true)
    private String id;
    
    public String getId() {
        return id;
    }
    
    public void setId(String id) {
        this.id = id;
    }
}
@Mapper
public interface OrderRepository {
    @Insert("insert into t_table (id) value (#{id});")
    void insert(LongIdEntity order);
    
    @Insert({
        "<script>",
        "insert into t_friendly_table (id)",
        "VALUES" +
            "<foreach item='item' collection='list' open='' separator=',' close=''>" +
            "(#{item.id})" +
            "</foreach>",
        "</script>"})
    void insertList(List<FriendlyIdEntity> list);
}
    LongIdEntity entity = new LongIdEntity();
    entityRepository.insert(entity);
    /**
     * {
     *   "id": 208796080181248
     * }
     */
    return entity;

ShardingSphere Plugin

cosid-shardingsphere

CosIdKeyGenerateAlgorithm (Distributed-Id)

spring:
  shardingsphere:
    rules:
      sharding:
        key-generators:
          cosid:
            type: COSID
            props:
              id-name: __share__

Interval-based time range sharding algorithm

CosIdIntervalShardingAlgorithm

  • Ease of use: supports multiple data types (Long / LocalDateTime / DATE / String / SnowflakeId). The official implementation first converts to a string and then to LocalDateTime, so the conversion success rate is affected by the time format pattern.
  • Performance: compared to org.apache.shardingsphere.sharding.algorithm.sharding.datetime.IntervalShardingAlgorithm, throughput is 1,200 to 4,000 times higher.
(benchmark charts: throughput of IntervalShardingAlgorithm for PreciseShardingValue and RangeShardingValue)
  • CosIdIntervalShardingAlgorithm
    • type: COSID_INTERVAL
spring:
  shardingsphere:
    rules:
      sharding:
        sharding-algorithms:
          alg-name:
            type: COSID_INTERVAL
            props:
              logic-name-prefix: logic-name-prefix
              id-name: cosid-name
              datetime-lower: 2021-12-08 22:00:00
              datetime-upper: 2022-12-01 00:00:00
              sharding-suffix-pattern: yyyyMM
              datetime-interval-unit: MONTHS
              datetime-interval-amount: 1

CosIdModShardingAlgorithm


  • Performance: compared to org.apache.shardingsphere.sharding.algorithm.sharding.datetime.IntervalShardingAlgorithm, throughput is 1,200 to 4,000 times higher, with better stability and no serious performance degradation.

(benchmark charts: throughput of ModShardingAlgorithm for PreciseShardingValue and RangeShardingValue)
spring:
  shardingsphere:
    rules:
      sharding:
        sharding-algorithms:
          alg-name:
            type: COSID_MOD
            props:
              mod: 4
              logic-name-prefix: t_table_

Examples

The project provides examples for each usage scenario (jdbc / proxy / redis-cosid / redis / shardingsphere / zookeeper, etc.); you can refer to their configurations to integrate quickly.

Click to view the Examples

Installation

Gradle

Kotlin DSL

    val cosidVersion = "1.14.5"
    implementation("me.ahoo.cosid:cosid-spring-boot-starter:${cosidVersion}")

Maven

<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>
    <artifactId>demo</artifactId>
    <properties>
        <cosid.version>1.14.5</cosid.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>me.ahoo.cosid</groupId>
            <artifactId>cosid-spring-boot-starter</artifactId>
            <version>${cosid.version}</version>
        </dependency>
    </dependencies>

</project>

application.yaml

spring:
  shardingsphere:
    datasource:
      names: ds0,ds1
      ds0:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbcUrl: jdbc:mysql://localhost:3306/cosid_db_0
        username: root
        password: root
      ds1:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbcUrl: jdbc:mysql://localhost:3306/cosid_db_1
        username: root
        password: root
    props:
      sql-show: true
    rules:
      sharding:
        binding-tables:
          - t_order,t_order_item
        tables:
          cosid:
            actual-data-nodes: ds0.cosid
          t_table:
            actual-data-nodes: ds0.t_table_$->{0..1}
            table-strategy:
              standard:
                sharding-column: id
                sharding-algorithm-name: table-inline
          t_date_log:
            actual-data-nodes: ds0.t_date_log_202112
            key-generate-strategy:
              column: id
              key-generator-name: snowflake
            table-strategy:
              standard:
                sharding-column: create_time
                sharding-algorithm-name: data-log-interval
        sharding-algorithms:
          table-inline:
            type: COSID_MOD
            props:
              mod: 2
              logic-name-prefix: t_table_
          data-log-interval:
            type: COSID_INTERVAL
            props:
              logic-name-prefix: t_date_log_
              datetime-lower: 2021-12-08 22:00:00
              datetime-upper: 2022-12-01 00:00:00
              sharding-suffix-pattern: yyyyMM
              datetime-interval-unit: MONTHS
              datetime-interval-amount: 1
        key-generators:
          snowflake:
            type: COSID
            props:
              id-name: snowflake


cosid:
  namespace: ${spring.application.name}
  machine:
    enabled: true
    #      stable: true
    #      machine-bit: 10
    #      instance-id: ${HOSTNAME}
    distributor:
      type: redis
    #        manual:
    #          machine-id: 0
  snowflake:
    enabled: true
    #    epoch: 1577203200000
    clock-backwards:
      spin-threshold: 10
      broken-threshold: 2000
    share:
      clock-sync: true
      friendly: true
    provider:
      order_item:
        #        timestamp-bit:
        sequence-bit: 12
      snowflake:
        sequence-bit: 12
      safeJs:
        machine-bit: 3
        sequence-bit: 9
  segment:
    enabled: true
    mode: chain
    chain:
      safe-distance: 5
      prefetch-worker:
        core-pool-size: 2
        prefetch-period: 1s
    distributor:
      type: redis
    share:
      offset: 0
      step: 100
    provider:
      order:
        offset: 10000
        step: 100
      longId:
        offset: 10000
        step: 100

JMH-Benchmark

  • Development machine: MacBook Pro (M1).
  • All benchmark tests were run on the development machine.
  • Redis was deployed on the development machine.

SnowflakeId

gradle cosid-core:jmh
# or
java -jar cosid-core/build/libs/cosid-core-1.14.5-jmh.jar -bm thrpt -wi 1 -rf json -f 1
Benchmark                                                    Mode  Cnt        Score   Error  Units
SnowflakeIdBenchmark.millisecondSnowflakeId_friendlyId      thrpt       4020311.665          ops/s
SnowflakeIdBenchmark.millisecondSnowflakeId_generate        thrpt       4095403.859          ops/s
SnowflakeIdBenchmark.safeJsMillisecondSnowflakeId_generate  thrpt        511654.048          ops/s
SnowflakeIdBenchmark.safeJsSecondSnowflakeId_generate       thrpt        539818.563          ops/s
SnowflakeIdBenchmark.secondSnowflakeId_generate             thrpt       4206843.941          ops/s

Throughput (ops/s) of SegmentChainId


Percentile-Sample (P9999=0.208 us/op) of SegmentChainId

In statistics, a percentile (or a centile) is a score below which a given percentage of scores in its frequency distribution falls (exclusive definition) or a score at or below which a given percentage falls (inclusive definition). For example, the 50th percentile (the median) is the score below which (exclusive) or at or below which (inclusive) 50% of the scores in the distribution may be found.
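For example, the nearest-rank method computes an inclusive ("at or below") percentile over a sorted sample. This is a generic sketch, unrelated to the CosId code base:

```java
import java.util.Arrays;

// Nearest-rank percentile: the smallest value at or below which at least
// p percent of the sorted samples fall (inclusive definition).
public class Percentile {
    static double nearestRank(double[] samples, double p) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        // 1-based rank of the percentile position.
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank, 1) - 1];
    }
}
```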


CosId VS MeiTuan Leaf

CosId (SegmentChainId) is 5 times faster than Leaf(segment).


Community Partners and Sponsors

cosid's People

Contributors

ahoo-wang, codacy-badger, cubita-io, fxbin, ji602089383, lunztech, renovate[bot], rocherkong, xinyuranyan, yaien6530


cosid's Issues

Runtime exception on newer Spring Boot versions

Bug Report

The demo fails on Spring Boot 2.6.6 but runs normally on Spring Boot 2.5.12.
Partial exception log:
1. description = 'Spring Boot [2.6.6] is not compatible with this Spring Cloud release train', action = 'Change Spring Boot version to one of the following versions [2.4.x, 2.5.x].'
2. If you want to disable this check, just set the property [spring.cloud.compatibility-verifier.enabled=false]

Before reporting a bug, make sure you have:

Please pay attention to the issues you submit, because we may need more details.
If there is no further response and we cannot reproduce the issue from the available information, we will close it.

Please answer these questions before submitting your issue. Thanks!

Which version of CosId did you use?

Expected behavior

Actual behavior

Reason analyze (If you can)

Guess: me.ahoo.cosid:cosid-spring-boot-starter pulls in org.springframework.cloud:spring-cloud-commons, which causes the version incompatibility.

Steps to reproduce the behavior

Example codes for reproduce this issue (such as a github link).

SecondSnowflakeId constructor uses the wrong default epoch value

Bug Report


Which version of CosId did you use?

1.10.0

Reason analyze (If you can)

    public SecondSnowflakeId(long machineId) {
        this(CosId.COSID_EPOCH_SECOND, DEFAULT_TIMESTAMP_BIT, DEFAULT_MACHINE_BIT, DEFAULT_SEQUENCE_BIT, machineId);
    }

    public SecondSnowflakeId(int machineBit, long machineId) {
        super(CosId.COSID_EPOCH, DEFAULT_TIMESTAMP_BIT, machineBit, DEFAULT_SEQUENCE_BIT, machineId);
    }

These two constructors use different epoch values; the second one should also use CosId.COSID_EPOCH_SECOND.

With state-storage disabled, resetStorage is still called on application shutdown, causing NotFoundMachineStateException

Bug Report


Which version of CosId did you use?

1.13.1

Expected behavior

Shuts down normally.

Actual behavior

An error is thrown:

me.ahoo.cosid.machine.NotFoundMachineStateException: Not found the MachineState of instance[InstanceId{instanceId=xxx:xxx, stable=true}]@[{cosid}]!
	at me.ahoo.cosid.machine.AbstractMachineIdDistributor.resetStorage(AbstractMachineIdDistributor.java:100) ~[cosid-core-1.13.1.jar:?]
	at me.ahoo.cosid.machine.AbstractMachineIdDistributor.revert(AbstractMachineIdDistributor.java:80) ~[cosid-core-1.13.1.jar:?]
	at me.ahoo.cosid.spring.boot.starter.machine.CosIdLifecycleMachineIdDistributor.stop(CosIdLifecycleMachineIdDistributor.java:49) ~[cosid-spring-boot-starter-1.13.1.jar:?]
	at org.springframework.context.SmartLifecycle.stop(SmartLifecycle.java:117) ~[spring-context-5.3.21.jar:5.3.21]
	at org.springframework.context.support.DefaultLifecycleProcessor.doStop(DefaultLifecycleProcessor.java:234) ~[spring-context-5.3.21.jar:5.3.21]
	at org.springframework.context.support.DefaultLifecycleProcessor.access$300(DefaultLifecycleProcessor.java:54) ~[spring-context-5.3.21.jar:5.3.21]
	at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.stop(DefaultLifecycleProcessor.java:373) ~[spring-context-5.3.21.jar:5.3.21]
	at org.springframework.context.support.DefaultLifecycleProcessor.stopBeans(DefaultLifecycleProcessor.java:206) ~[spring-context-5.3.21.jar:5.3.21]
	at org.springframework.context.support.DefaultLifecycleProcessor.onClose(DefaultLifecycleProcessor.java:129) ~[spring-context-5.3.21.jar:5.3.21]
	at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:1067) ~[spring-context-5.3.21.jar:5.3.21]
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.doClose(ServletWebServerApplicationContext.java:174) ~[spring-boot-2.7.1.jar:2.7.1]
	at org.springframework.context.support.AbstractApplicationContext.close(AbstractApplicationContext.java:1021) ~[spring-context-5.3.21.jar:5.3.21]
	at org.springframework.boot.SpringApplicationShutdownHook.closeAndWait(SpringApplicationShutdownHook.java:145) ~[spring-boot-2.7.1.jar:2.7.1]
	at java.lang.Iterable.forEach(Iterable.java:75) ~[?:?]
	at org.springframework.boot.SpringApplicationShutdownHook.run(SpringApplicationShutdownHook.java:114) ~[spring-boot-2.7.1.jar:2.7.1]
	at java.lang.Thread.run(Thread.java:833) ~[?:?]

Reason analyze (If you can)

Steps to reproduce the behavior

Set state-storage to false.

Example codes for reproduce this issue (such as a github link).

cosid:
  machine:
    enabled: true
    stable: true
    state-storage:
      enabled: false
    distributor:
      type: manual
      manual:
        machine-id: 0
    guarder:
      enabled: false
    clock-backwards:
      spin-threshold: 1
      broken-threshold: 10000
  snowflake:
    enabled: true
    epoch: 1659283200000
    share:
      clock-sync: true
      friendly: true

Support generating IDs in a datetime + sequence format

Order numbers are usually a datetime plus a sequence, e.g. 20220520142030 + 87663233. Does CosId support generating serial numbers in this format?

Help specific documents

Hello, are there specific documents for integrating with Spring Boot? I can see some configurations in the README, but many of the Spring Boot configuration options have no detailed description. Could you provide specific configuration documentation, as well as a description of the different packages in the Maven repository? Thank you.

Also, are there best practices? Moreover, the cosid-rest-api failed.

It's better to return a consistent sub-type of Collection from the IntervalTimeline.sharding method

Is your feature request related to a problem? Please describe.

CosId was added to Apache ShardingSphere recently. When I run the shardingsphere-sharding-core module's unit tests in IDEA, some cases of CosIdIntervalShardingAlgorithmTest related to IntervalTimeline.sharding fail.

Related code:

        @Test
        public void assertDoSharding() {
            RangeShardingValue shardingValue = new RangeShardingValue<>(LOGIC_NAME, COLUMN_NAME, rangeValue);
            Collection<String> actual = shardingAlgorithm.doSharding(ALL_NODES, shardingValue);
            assertThat(actual, is(expected));
        }

In fact, actual and expected have the same elements, but they are different sub-types of Collection, so they are not equal.

actual is an instance of Collections$SingletonSet, but expected is an instance of ExactCollection.

Describe the solution you'd like

In IntervalTimeline.java

    @Override
    public Collection<String> sharding(Range<LocalDateTime> shardingValue) {
//...
        if (lowerOffset == upperOffset) {
            return Collections.singleton(lastInterval.getNode());
        }
//...
        ExactCollection<String> nodes = new ExactCollection<>(nodeSize);
//...
        return nodes;
    }

Collections.singleton(lastInterval.getNode()) could be replaced to return an ExactCollection, just like the other return statements.

Describe alternatives you've considered

It's possible to fix it by changing expected in CosIdIntervalShardingAlgorithmTest.rangeArgsProvider, but that is a little strange; users should not need to be concerned with the implementation of IntervalTimeline.sharding.


IntervalTimeline produces wrong sharding results when the interval step unit is HOURS

Bug Report


Which version of CosId did you use?

ShardingSphere 5.20, CosId 1.14.4

Expected behavior

Expected output: table_20221015_06

Actual behavior

Actual output: table_20221015_00

Reason analyze (If you can)

Source code:

    /**
     * Calculate the unit offset.
     * Starts from 0.
     *
     * @param start minimum value
     * @param time  time
     * @return offset
     */
    public int offsetUnit(LocalDateTime start, LocalDateTime time) {
        return getDiffUnit(start, time) / amount;
    }

    private int getDiffUnit(LocalDateTime startInterval, LocalDateTime time) {
        switch (unit) {
            case YEARS: {
                return getDiffYear(startInterval, time);
            }
            case MONTHS: {
                return getDiffYearMonth(startInterval, time);
            }
            case DAYS: {
                return getDiffYearMonthDay(startInterval, time);
            }
            case HOURS: {
                // The day count is multiplied by 24 to convert to hours, so hours
                // short of a full day are lost and the offset is computed incorrectly.
                return getDiffYearMonthDay(startInterval, time) * 24;
            }
            }
            case MINUTES: {
                return getDiffYearMonthDay(startInterval, time) * 24 * 60;
            }
            case SECONDS: {
                return getDiffYearMonthDay(startInterval, time) * 24 * 60 * 60;
            }
            default:
                throw new IllegalStateException("Unexpected value: " + unit);
        }
    }

Steps to reproduce the behavior

Example codes for reproduce this issue (such as a github link).

    public static void main(String[] args) {
        DateTimeFormatter dateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        String lower = "2022-10-10 00:00:00";
        String upper = "2022-10-16 00:00:00";
        LocalDateTime lowerDate = LocalDateTime.parse(lower, dateTimeFormatter);
        LocalDateTime upperDate = LocalDateTime.parse(upper, dateTimeFormatter);
        IntervalTimeline timeline = new IntervalTimeline("table_", Range.closed(lowerDate, upperDate), IntervalStep.of(ChronoUnit.HOURS, 6), DateTimeFormatter.ofPattern("yyyyMMdd_HH"));
        LocalDateTime shardingValue = LocalDateTime.parse("2022-10-15 11:40:00", dateTimeFormatter);
        String table = timeline.sharding(shardingValue);
        System.out.println(table);
    }

Add `cosid-test` for spec implementation.

The corresponding ZooKeeper node is not deleted when the application goes down

If the application crashes or is killed directly, the corresponding ZooKeeper node is not deleted, so the nodes the application creates in ZooKeeper keep accumulating, one more per crash. Why is this node not created as an ephemeral node? Then it would also be deleted when the instance goes down.


Can IdSegmentDistributor expose a listener so that users can extend the storage of nextMaxId themselves?


The first ID generated in a batch loop is not consecutive with the following IDs; the sequence only increases consecutively from the second ID onward

Bug Report

In a simple loop-generation test, the first generated ID is not followed consecutively by the second; the IDs only become consecutive from the second one onward. Why is this? I ran the test many times with basically the same result. Thanks.

    public static void main(String[] args) {
        // epoch and machineId were changed; everything else uses the default values
        MillisecondSnowflakeId millisecondSnowflakeId = new MillisecondSnowflakeId(1577836800000L, 41, 10, 12, 3);
        for (int i = 0; i < 100; i++) {
            long generate = millisecondSnowflakeId.generate();
            System.out.println(generate);
        }
    }

The output is:

# The first ID has no consecutive relationship with the second; the IDs only increase consecutively from the second one onward

321493968892211200 

321493968896405504
321493968896405505
321493968896405506
321493968896405507
321493968896405508
321493968900599808
321493968900599809
321493968900599810
321493968900599811
321493968900599812
321493968900599813
321493968900599814
321493968900599815
321493968900599816
321493968900599817
321493968900599818
321493968900599819
321493968900599820
321493968900599821
321493968900599822
321493968900599823
321493968900599824
321493968900599825
321493968900599826
321493968900599827
321493968900599828
321493968900599829
321493968900599830
321493968900599831
321493968900599832
321493968900599833
321493968900599834
321493968900599835
321493968900599836
321493968900599837
321493968900599838
321493968900599839
321493968900599840
321493968900599841
321493968900599842
321493968900599843
321493968900599844
321493968904794112
321493968904794113
321493968904794114
321493968904794115
321493968904794116
321493968904794117
321493968904794118
321493968904794119
321493968904794120
321493968904794121
321493968904794122
321493968904794123
321493968904794124
321493968904794125
321493968904794126
321493968904794127
321493968904794128
321493968904794129
321493968904794130
321493968904794131
321493968904794132
321493968904794133
321493968904794134
321493968904794135
321493968904794136
321493968904794137
321493968904794138
321493968904794139
321493968904794140
321493968904794141
321493968904794142
321493968904794143
321493968908988416
321493968908988417
321493968908988418
321493968908988419
321493968908988420
321493968908988421
321493968908988422
321493968908988423
321493968908988424
321493968908988425
321493968908988426
321493968908988427
321493968908988428
321493968908988429
321493968908988430
321493968908988431
321493968908988432
321493968908988433
321493968908988434
321493968908988435
321493968908988436
321493968908988437
321493968908988438
321493968908988439
321493968908988440
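The jump looks consistent with a millisecond rollover rather than a broken step: assuming the bit layout implied by the constructor arguments (41 timestamp bits, 10 machine bits, 12 sequence bits, machineId 3), the first two reported IDs decode to adjacent timestamps, each with sequence 0. A minimal decoding sketch:

```java
public class DecodeSnowflake {
    // Field extractors for the assumed layout: timestamp(41) | machineId(10) | sequence(12).
    static long timestamp(long id) { return id >>> 22; }
    static long machineId(long id) { return (id >>> 12) & 1023; }
    static long sequence(long id) { return id & 4095; }

    public static void main(String[] args) {
        long first = 321493968892211200L;   // first reported ID
        long second = 321493968896405504L;  // second reported ID
        // Both IDs carry machineId 3 and sequence 0; only the timestamp field differs.
        System.out.println(timestamp(second) - timestamp(first));
        System.out.println(sequence(first));
        System.out.println(machineId(first));
    }
}
```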


Which version of CosId did you use?

1.12.0

mybatis id

1.7.9

cosid:
  namespace: ${spring.application.name}
  segment:
    enabled: true
    mode: chain
    chain:
      safe-distance: 5
      prefetch-worker:
        core-pool-size: 2
        prefetch-period: 1s
    distributor:
      type: redis
    share:
      offset: 5
      step: 100
      converter:
        prefix:
        type: radix
        radix:
          char-size: 6
          pad-start: false
    provider:
      order:
        offset: 1
        step: 100
@Cosid(value="order")
private Long id;

The id specifies the provider `order`, but when MyBatis actually inserts into the database it always starts from the `share` offset. Am I using it incorrectly?

    new User(null, "张三");

    @Insert(value = "insert into user values (#{id}, #{name})")
    void save(User xx);

Invalid file path

Error creating bean with name 'shareSnowflakeId' defined in class path resource [me/ahoo/cosid/spring/boot/starter/snowflake/CosIdSnowflakeAutoConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [me.ahoo.cosid.snowflake.SnowflakeId]: Factory method 'shareSnowflakeId' threw exception; nested exception is me.ahoo.cosid.CosIdException: java.io.IOException: Invalid file path

With Redis mode as the distributor, Alibaba Cloud's ApsaraDB for Redis reports a script error when executing the Lua script during initialization

Which version of CosId did you use?

1.12.1

Bug Report

Standalone and master-replica Redis both worked fine before.

Our test environment runs a standalone Redis deployed with Docker; production and UAT use Alibaba Cloud's managed ApsaraDB for Redis, cluster edition 5.0. When we deployed to Alibaba Cloud, the service reported a Lua script execution error at startup, as follows:

[2022-07-07 16:06:40.182] [INFO ] [SWTraceId - TID: N/A] [] [me.ahoo.cosid.spring.redis.SpringRedisMachineIdDistributor] - distributeRemote - instanceId:[InstanceId{instanceId=172.28.25.39:7, stable=false}] - machineBit:[10] @ namespace:[xxxx-service]. 
[2022-07-07 16:06:40.300] [WARN ] [SWTraceId - TID: N/A] [] [org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext] - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.boot.autoconfigure.web.servlet.ServletWebServerFactoryConfiguration$EmbeddedTomcat': Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sqlSessionFactory' defined in class path resource [com/baomidou/mybatisplus/autoconfigure/MybatisPlusAutoConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.apache.ibatis.session.SqlSessionFactory]: Factory method 'sqlSessionFactory' threw exception; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'shareSnowflakeId' defined in class path resource [me/ahoo/cosid/spring/boot/starter/snowflake/CosIdSnowflakeAutoConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [me.ahoo.cosid.snowflake.SnowflakeId]: Factory method 'shareSnowflakeId' threw exception; nested exception is org.springframework.dao.InvalidDataAccessApiUsageException: ERR bad lua script for redis cluster, all the keys that the script uses should be passed using the KEYS array, and KEYS should not be in expression. 
channel: [id: 0x7f4f67ec, L:/172.28.25.39:43428 - R:r-bp1usnu4324n3247.redis.rds.aliyuncs.com/12.9.0.43:579] command: (EVAL), params: [[45, 45, 32, 105, 116, 99, 95, 105, 100, 120, ...], 1, [123, 112, 105, 109, 115, 45, 102, 105, 110, 97, ...], [49, 55, 50 
, 46, 50, 56, 46, 50, 53, 46, ...], [49, 48, 50, 51], [49, 54, 53, 55, 49, 56, 49, 50, 48, 48, ...], [49, 54, 53, 55, 49, 56, 48, 57, 48, 48, ...]]; nested exception is org.redisson.client.RedisException: ERR bad lua script for redis cluster, all the keys that the script uses should be passed using the KEYS array, and KEYS should not be in expression. channel: [id: 0x7f4f67ec, L:/172.28.25.39:43428 - R:r-adsadsadsad.redis.rds.aliyuncs.com/12.9.0.1213:349] command: (EVAL), params: [[45, 45, 32, 105, 116, 99, 95, 105, 100, 120, ...], 1, [123, 112, 105, 109, 115, 45, 102, 105, 110, 97, ...], [49, 55, 50, 46, 50, 56, 46, 50, 53, 46, ...], [49, 48, 50, 51], [49, 54, 53, 55, 49, 56, 49, 50, 48, 48, ...], [49, 54, 53, 55, 49, 56, 48, 57, 48, 48, ...]] 
[2022-07-07 16:06:40.345] [WARN ] [SWTraceId - TID: N/A] [] [org.springframework.context.annotation.CommonAnnotationBeanPostProcessor] - Destroy method on bean with name 'lifecycleBootstrap' threw an exception: org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'redisClientFactory': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!) 
[2022-07-07 16:06:40.357] [INFO ] [SWTraceId - TID: N/A] [] [org.springframework.boot.autoconfigure.logging.ConditionEvaluationReportLoggingListener] - 
 
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled. 
[2022-07-07 16:06:40.361] [ERROR] [SWTraceId - TID: N/A] [] [org.springframework.boot.SpringApplication] - Application run failed 

Reason analyze (If you can)

Alibaba Cloud's documentation explains this: certain Lua script usage patterns are restricted: https://help.aliyun.com/document_detail/92942.html?spm=5176.13910061.sslink.1.36426f0dV6cOrU

Quoted from: https://www.jianshu.com/p/6bd82d96ffcf

I have verified that changing script_check_enable to 0 via the console works around the problem, but could the Lua scripts be adjusted to comply with these restrictions instead?

Importing only cosid-spring-boot-starter causes an error; cosid-spring-redis must also be imported, which the documentation does not mention.
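For reference, a Gradle dependency sketch matching this report (the version number is illustrative, not prescriptive):

```gradle
dependencies {
    // The starter alone is not enough when the Redis distributor is used:
    implementation("me.ahoo.cosid:cosid-spring-boot-starter:1.12.0")
    implementation("me.ahoo.cosid:cosid-spring-redis:1.12.0")
}
```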


spring-boot-starter naming convention issue

  • Official namespace (official Spring Boot projects)
    Prefix: spring-boot-starter-
    Pattern: spring-boot-starter-{module}
    Examples: spring-boot-starter-web, spring-boot-starter-jdbc
  • Custom namespace (non-official Spring Boot projects)
    Suffix: -spring-boot-starter
    Pattern: {module}-spring-boot-starter
    Example: mybatis-spring-boot-starter

Possible waste of ID segments when using Kubernetes HPA for elastic scaling

Is your feature request related to a problem? Please describe.
First: at my small shop we run production projects on Kubernetes for ease of operations, and we use HPA for elastic scaling.
Some projects see barely any requests on quiet days, but can reach around 3k concurrent requests at business peaks. Since we have few ops staff, we rely on Kubernetes HPA for such projects, and it has actually worked quite well.
With CosId's segment-chain mode, a PrefetchWorker prefetches segments, and the prefetched segments live in the service's memory without being persisted (if I understand correctly). I have read your documentation and do not doubt CosId's performance.
However, at peak load HPA scales the Java deployment out to many pods, each of which prefetches segments; the higher the concurrency, the larger the safe distance grows and the more segments are prefetched. After the peak passes, HPA scales many pods back down. The segments those removed pods had prefetched may never have been used, so when the pods disappear those segments and IDs are wasted, and the IDs actually stored in the database can have large gaps.
That said, our business volume will never come close to exhausting a Long-typed ID space, so this segment waste is tolerable.

Second: some of our business uses bitsets. Many of our projects previously used a Snowflake variant for distributed IDs, but Snowflake IDs are very unfriendly to bitsets, so I am considering switching to the segment algorithm for distributed IDs, which is how I found CosId.
When segments are wasted, the bitset length may grow and consume more memory. Even if I shard the bitset, wasted IDs within segments can leave shards that are mostly zeros with only a few ones, wasting memory. Is there any way to optimize this?

Describe the solution you'd like
Having read the CosId documentation, I believe configuring a smaller segment step (`step`) and a smaller safe distance (`safe-distance`) can minimize segment and ID waste.
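As a sketch of that tuning, using the property names shown in this page's other CosId Spring Boot configs (the values are illustrative assumptions, not recommendations):

```yaml
cosid:
  segment:
    enabled: true
    mode: chain
    chain:
      safe-distance: 1   # smaller lower bound for the dynamic safe distance
    distributor:
      type: redis
    provider:
      order:
        step: 10         # smaller segments mean fewer IDs lost on scale-down
```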

Describe alternatives you've considered
Could there be a new segment-prefetching strategy? The current PrefetchWorker adaptively changes the safe distance (`safe-distance`).
Could the PrefetchWorker instead adaptively change the prefetch period (`prefetch-period`) — the higher the concurrency, the shorter the period — while keeping the safe distance fixed?

Alternatively, persist information about which segments are in use and which have been fully consumed, and periodically reclaim old segments that were never used. But that would break the trend-increasing guarantee.

Additional context

Performance: CosId vs Leaf

CosId-VS-Leaf

https://cosid.ahoo.me/guide/Performance-CosId-Leaf.html

Describe the solution you'd like

Add a benchmark comparison between CosId and Leaf, and add it to the workflows.

https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources


Due to GitHub-hosted runner resource constraints, benchmarks run on a GitHub runner differ substantially (by nearly a factor of two) from benchmarks run in a real environment.
However, comparing benchmarks before and after a commit, or against third-party libraries, remains valuable when everything runs in the same environment with the same resources (i.e. all on GitHub runners).



Does the @CosId annotation trigger segment mode?

When debugging, the configuration is:

cosid:

  namespace: ${spring.application.name}
  snowflake:
    enabled: true
    machine:
      distributor:
        type: redis

The default is the share ID generator, right? The ID my code obtains is inconsistent with the one generated by @CosId. I have just started reading the code and am a bit confused; please advise.
The code obtains it like this:

      // gets 315557917682049024, which starts with 3
      long id = idGeneratorProvider.getShare().generate();

The entity is annotated like this:

   // generated id: 6922446069401911327
  @CosId
  @TableId(value = "id", type = IdType.INPUT)
  private Long id;

How can I make the manually obtained ID also start with 6? Is that possible?

A Spring Boot project built with Gradle that depends on okhttp 4.x (an upgraded version) fails to compile after the component is configured

Bug Report

Before report a bug, make sure you have:

Please pay attention on issues you submitted, because we maybe need more details.
If no response anymore and we cannot reproduce it on current information, we will close it.

Please answer these questions before submitting your issue. Thanks!

Which version of CosId did you use?

1.10.2

Expected behavior

Gradle build succeeded

Actual behavior

Gradle build error

> Task :compileJava FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':compileJava'.
> Could not resolve all files for configuration ':compileClasspath'.
   > Could not resolve com.squareup.okhttp3:okhttp:3.14.9.
     Required by:
         project : > me.ahoo.cosid:cosid-core:1.10.2 > me.ahoo.cosid:cosid-dependencies:1.10.2
      > No matching variant of com.squareup.okhttp3:okhttp:4.9.3 was found. The consumer was configured to find an API of a component compatible with Java 11, preferably in the form of class files, preferably optimized for standard JVMs, and its dependencies declared externally, as well as attribute 'org.gradle.category' with value 'platform' but:
          - Variant 'apiElements' capability com.squareup.okhttp3:okhttp:4.9.3 declares an API of a component compatible with Java 8, packaged as a jar, and its dependencies declared externally:
              - Incompatible because this component declares a component, as well as attribute 'org.gradle.category' with value 'library' and the consumer needed a component, as well as attribute 'org.gradle.category' with value 'platform'
              - Other compatible attribute:
                  - Doesn't say anything about its target Java environment (preferred optimized for standard JVMs)
          - Variant 'javadocElements' capability com.squareup.okhttp3:okhttp:4.9.3 declares a runtime of a component, and its dependencies declared externally:
              - Incompatible because this component declares a component, as well as attribute 'org.gradle.category' with value 'documentation' and the consumer needed a component, as well as attribute 'org.gradle.category' with value 'platform'
              - Other compatible attributes:
                  - Doesn't say anything about its target Java environment (preferred optimized for standard JVMs)
                  - Doesn't say anything about its target Java version (required compatibility with Java 11)
                  - Doesn't say anything about its elements (required them preferably in the form of class files)
          - Variant 'runtimeElements' capability com.squareup.okhttp3:okhttp:4.9.3 declares a runtime of a component compatible with Java 8, packaged as a jar, and its dependencies declared externally:
              - Incompatible because this component declares a component, as well as attribute 'org.gradle.category' with value 'library' and the consumer needed a component, as well as attribute 'org.gradle.category' with value 'platform'
              - Other compatible attribute:
                  - Doesn't say anything about its target Java environment (preferred optimized for standard JVMs)
          - Variant 'sourcesElements' capability com.squareup.okhttp3:okhttp:4.9.3 declares a runtime of a component, and its dependencies declared externally:
              - Incompatible because this component declares a component, as well as attribute 'org.gradle.category' with value 'documentation' and the consumer needed a component, as well as attribute 'org.gradle.category' with value 'platform'
              - Other compatible attributes:
                  - Doesn't say anything about its target Java environment (preferred optimized for standard JVMs)
                  - Doesn't say anything about its target Java version (required compatibility with Java 11)
                  - Doesn't say anything about its elements (required them preferably in the form of class files)

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.

You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

See https://docs.gradle.org/7.4.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 2s
2 actionable tasks: 1 executed, 1 up-to-date

Reason analyze (If you can)

Steps to reproduce the behavior

execute gradle clean build -x test

Example codes for reproduce this issue (such as a github link).

ext {
    set('springCloudVersion', "2021.0.2")
    set('cosidVersion', '1.10.2')
    set('okhttpVersionn', '4.9.3')
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.boot:spring-boot-starter-jdbc'

    implementation 'me.ahoo.cosid:cosid-core'
    implementation 'me.ahoo.cosid:cosid-jackson'
    implementation 'me.ahoo.cosid:cosid-jdbc'
    implementation 'me.ahoo.cosid:cosid-mybatis'
    implementation 'me.ahoo.cosid:cosid-redis'
    implementation 'me.ahoo.cosid:cosid-shardingsphere'
    implementation 'me.ahoo.cosid:cosid-spring-boot-starter'
    implementation 'me.ahoo.cosid:cosid-spring-redis'
    implementation 'me.ahoo.cosid:cosid-zookeeper'

    implementation "com.squareup.okhttp3:okhttp:${okhttpVersionn}"
}
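One possible workaround sketch — untested here — is to bypass the failing platform-variant resolution by forcing a concrete okhttp version with Gradle's standard resolution strategy:

```gradle
configurations.all {
    // Force a concrete okhttp version instead of resolving it through the
    // BOM variant that fails the 'platform' attribute match.
    resolutionStrategy.force("com.squareup.okhttp3:okhttp:4.9.3")
}
```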

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.


Warning

Renovate failed to look up the following dependencies: Failed to look up maven package com.sankuai.inf.leaf:leaf-core.

Files affected: cosid-benchmark/build.gradle.kts


Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Ignored or Blocked

These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.

Detected dependencies

dockerfile
cosid-proxy-server/Dockerfile
  • openjdk 21-jdk-slim
github-actions
.github/workflows/benchmark-test.yml
  • actions/setup-java v4
  • actions/setup-java v4
  • actions/setup-java v4
.github/workflows/benchmark-vs-test.yml
  • actions/setup-java v4
.github/workflows/codecov.yml
  • actions/setup-java v4
  • codecov/codecov-action v4
.github/workflows/codeql-analysis.yml
  • actions/checkout v4
  • github/codeql-action v3
  • github/codeql-action v3
  • github/codeql-action v3
.github/workflows/docker-deploy.yml
  • actions/setup-java v4
  • docker/setup-qemu-action v3
  • docker/setup-buildx-action v3
  • docker/login-action v3
  • docker/login-action v3
  • docker/login-action v3
  • docker/metadata-action v5
  • docker/build-push-action v5
.github/workflows/document-deploy.yml
  • actions/setup-java v4
  • actions/setup-node v4
  • crazy-max/ghaction-github-pages v4
  • crazy-max/ghaction-github-pages v4
.github/workflows/example-test.yml
  • actions/setup-java v4
  • actions/setup-java v4
  • actions/setup-java v4
.github/workflows/gitee-sync.yml
.github/workflows/integration-test.yml
  • actions/setup-java v4
  • actions/setup-java v4
  • actions/setup-java v4
  • actions/setup-java v4
  • actions/setup-java v4
  • actions/setup-java v4
  • actions/setup-java v4
  • actions/setup-java v4
  • actions/setup-java v4
  • actions/setup-java v4
  • actions/setup-java v4
  • actions/setup-java v4
  • actions/setup-java v4
.github/workflows/package-deploy.yml
  • actions/setup-java v4
  • actions/setup-java v4
gradle
gradle.properties
settings.gradle.kts
build.gradle.kts
code-coverage-report/build.gradle.kts
cosid-activiti/build.gradle.kts
cosid-axon/build.gradle.kts
cosid-benchmark/gradle.properties
cosid-benchmark/settings.gradle.kts
cosid-benchmark/build.gradle.kts
  • me.champeau.jmh 0.7.2
  • com.zaxxer:HikariCP 5.1.0
  • mysql:mysql-connector-java 8.0.33
  • com.sankuai.inf.leaf:leaf-core 1.0.1
  • org.openjdk.jmh:jmh-core 1.37
  • org.openjdk.jmh:jmh-generator-annprocess 1.37
  • org.junit.jupiter:junit-jupiter-api 5.10.2
  • org.junit.jupiter:junit-jupiter-params 5.10.2
  • org.junit.jupiter:junit-jupiter-engine 5.10.2
  • jmh 1.37
cosid-benchmark/gradle/libs.versions.toml
  • me.ahoo.cosid:cosid-bom 2.6.8
  • com.zaxxer:HikariCP 5.1.0
  • mysql:mysql-connector-java 8.0.33
  • org.openjdk.jmh:jmh-core 1.37
  • org.openjdk.jmh:jmh-generator-annprocess 1.37
  • org.junit.jupiter:junit-jupiter-api 5.10.2
  • org.junit.jupiter:junit-jupiter-params 5.10.2
  • org.junit.jupiter:junit-jupiter-engine 5.10.2
  • me.champeau.jmh 0.7.2
cosid-bom/build.gradle.kts
cosid-core/build.gradle.kts
cosid-dependencies/build.gradle.kts
cosid-flowable/build.gradle.kts
cosid-jackson/build.gradle.kts
cosid-jdbc/build.gradle.kts
cosid-mod-test/build.gradle.kts
  • com.netease.nim:camellia-id-gen-core 1.2.20
  • org.apache.shardingsphere:shardingsphere-sharding-core 5.4.1
cosid-mongo/build.gradle.kts
cosid-mybatis/build.gradle.kts
cosid-proxy/build.gradle.kts
cosid-proxy-server/build.gradle.kts
cosid-spring-boot-starter/build.gradle.kts
cosid-spring-data-jdbc/build.gradle.kts
cosid-spring-redis/build.gradle.kts
cosid-test/build.gradle.kts
cosid-zookeeper/build.gradle.kts
gradle/libs.versions.toml
  • org.springframework.boot:spring-boot-dependencies 3.2.5
  • org.springframework.cloud:spring-cloud-dependencies 2023.0.1
  • com.squareup.okhttp3:okhttp-bom 4.12.0
  • org.axonframework:axon-bom 4.9.4
  • org.testcontainers:testcontainers-bom 1.19.7
  • com.google.guava:guava 33.1.0-jre
  • org.mybatis:mybatis 3.5.15
  • org.mybatis.spring.boot:mybatis-spring-boot-starter 3.0.3
  • org.springdoc:springdoc-openapi-starter-webflux-ui 2.5.0
  • org.activiti:activiti-engine 7.0.0.SR1
  • org.activiti:activiti-spring-boot-starter 7.0.0.SR1
  • org.flowable:flowable-engine-common 7.0.1
  • org.flowable:flowable-spring 7.0.1
  • org.flowable:flowable-spring-boot-autoconfigure 7.0.1
  • org.junit-pioneer:junit-pioneer 2.2.0
  • org.hamcrest:hamcrest 2.2
  • org.openjdk.jmh:jmh-core 1.37
  • org.openjdk.jmh:jmh-generator-annprocess 1.37
  • org.gradle.test-retry 1.5.9
  • io.github.gradle-nexus.publish-plugin 2.0.0
  • me.champeau.jmh 0.7.2
  • com.github.spotbugs 6.0.9
gradle-wrapper
cosid-benchmark/gradle/wrapper/gradle-wrapper.properties
  • gradle 8.7
gradle/wrapper/gradle-wrapper.properties
  • gradle 8.7
npm
document/package.json
  • @vuepress/plugin-back-to-top 1.9.10
  • @vuepress/plugin-google-analytics 1.9.10
  • @vuepress/plugin-medium-zoom 1.9.10
  • @vuepress/plugin-pwa 1.9.10
  • @vuepress/theme-vue 1.9.10
  • vue-toasted 1.1.28
  • vuepress 1.9.10
  • vuepress-plugin-flowchart 1.5.0
documentation/package.json
  • mermaid ^10.6.1
  • vitepress ^1.0.0-rc.33
  • vitepress-plugin-mermaid ^2.0.16

