
ianvs's Introduction

KubeEdge


English | 简体中文

KubeEdge is built upon Kubernetes and extends native containerized application orchestration and device management to hosts at the Edge. It consists of a cloud part and an edge part, and provides core infrastructure support for networking, application deployment, and metadata synchronization between cloud and edge. It also supports MQTT, which enables edge devices to connect through edge nodes.

With KubeEdge it is easy to get and deploy existing complicated machine learning, image recognition, event processing, and other high-level applications to the Edge. With business logic running at the Edge, much larger volumes of data can be secured and processed locally, where the data is produced. With data processed at the Edge, responsiveness increases dramatically and data privacy is protected.

KubeEdge is an incubation-level hosted project by the Cloud Native Computing Foundation (CNCF). KubeEdge incubation announcement by CNCF.

Advantages

  • Kubernetes-native support: Manage edge applications and edge devices in the cloud with fully compatible Kubernetes APIs.
  • Cloud-Edge Reliable Collaboration: Ensure reliable message delivery, without loss, over unstable cloud-edge networks.
  • Edge Autonomy: Ensure edge nodes run autonomously and applications at the edge run normally when the cloud-edge network is unstable or the edge is offline and restarted.
  • Edge Devices Management: Manage edge devices through Kubernetes-native APIs implemented by CRDs.
  • Extremely Lightweight Edge Agent: An extremely lightweight edge agent (EdgeCore) that runs on resource-constrained edge hosts.

How It Works

KubeEdge consists of a cloud part and an edge part.

Architecture

In the Cloud

  • CloudHub: a WebSocket server responsible for watching changes on the cloud side, caching messages, and sending them to EdgeHub.
  • EdgeController: an extended Kubernetes controller which manages edge node and pod metadata so that the data can be targeted to a specific edge node.
  • DeviceController: an extended Kubernetes controller which manages devices so that device metadata/status data can be synced between edge and cloud.

On the Edge

  • EdgeHub: a WebSocket client responsible for interacting with cloud services for edge computing (such as EdgeController in the KubeEdge architecture). This includes syncing cloud-side resource updates to the edge and reporting edge-side host and device status changes to the cloud.
  • Edged: an agent that runs on edge nodes and manages containerized applications.
  • EventBus: an MQTT client to interact with MQTT servers (mosquitto), offering publish and subscribe capabilities to other components; a minimal publish sketch follows this list.
  • ServiceBus: an HTTP client to interact with HTTP servers (REST), offering HTTP client capabilities so that cloud components can reach HTTP servers running at the edge.
  • DeviceTwin: responsible for storing device status and syncing device status to the cloud. It also provides query interfaces for applications.
  • MetaManager: the message processor between Edged and EdgeHub. It is also responsible for storing/retrieving metadata to/from a lightweight database (SQLite).
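
As a concrete illustration of the EventBus component above, the following is a minimal sketch of how an edge application could publish a device status update over MQTT in Python. It assumes a local mosquitto broker on the default port and a device-twin topic format taken from common KubeEdge conventions; both are assumptions, not something verified against this page.

import json
import paho.mqtt.client as mqtt  # third-party package: paho-mqtt

# Assumed values: broker address/port of the local mosquitto instance and the
# device-twin update topic format; adjust both to your deployment.
BROKER_HOST = "127.0.0.1"
BROKER_PORT = 1883
DEVICE_ID = "temperature-sensor-01"  # hypothetical device name
TOPIC = f"$hw/events/device/{DEVICE_ID}/twin/update"

payload = {
    "event_id": "",
    "timestamp": 0,
    "twin": {
        "temperature": {"actual": {"value": "26.5"}, "metadata": {"type": "Updated"}}
    },
}

client = mqtt.Client()  # paho-mqtt 1.x constructor; 2.x also requires a CallbackAPIVersion argument
client.connect(BROKER_HOST, BROKER_PORT)
client.publish(TOPIC, json.dumps(payload))  # EventBus forwards such messages to DeviceTwin
client.disconnect()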

Kubernetes compatibility

Kubernetes version      1.20  1.21  1.22  1.23  1.24  1.25  1.26  1.27  1.28  1.29
KubeEdge 1.12           ✓     ✓     ✓     -     -     -     -     -     -     -
KubeEdge 1.13           +     ✓     ✓     ✓     -     -     -     -     -     -
KubeEdge 1.14           +     +     ✓     ✓     ✓     -     -     -     -     -
KubeEdge 1.15           +     +     +     +     ✓     ✓     ✓     -     -     -
KubeEdge 1.16           +     +     +     +     +     ✓     ✓     ✓     -     -
KubeEdge 1.17           +     +     +     +     +     +     ✓     ✓     ✓     -
KubeEdge HEAD (master)  +     +     +     +     +     +     +     ✓     ✓     ✓

Key:

  • ✓ KubeEdge and the Kubernetes version are exactly compatible.
  • + KubeEdge has features or API objects that may not be present in the Kubernetes version.
  • - The Kubernetes version has features or API objects that KubeEdge can't use.

Guides

Get started with this doc.

See our documentation on kubeedge.io for more details.

To learn more about KubeEdge in depth, try the examples.

Roadmap

Meeting

Regular Community Meeting:

Resources:

Contact

If you need support, start with the troubleshooting guide, and work your way through the process that we've outlined.

If you have questions, feel free to reach out to us in the following ways:

Contributing

If you're interested in being a contributor and want to get involved in developing the KubeEdge code, please see CONTRIBUTING for details on submitting patches and the contribution workflow.

Security

Security Audit

A third-party security audit of KubeEdge was completed in July 2022. Additionally, the KubeEdge community completed an overall system security analysis of KubeEdge. The detailed reports are as follows.

Reporting security vulnerabilities

We encourage security researchers, industry organizations, and users to proactively report suspected vulnerabilities to our security team ([email protected]); the team will help diagnose the severity of the issue and determine how to address it as soon as possible.

For further details please see Security Policy for our security process and how to report vulnerabilities.

License

KubeEdge is under the Apache 2.0 license. See the LICENSE file for details.

ianvs's People

Contributors

back1860, frank-lilinjie, hsj576, iszhyang, jaypume, kevin-wangzefeng, luosiqi, moorezheng, nailtu30, qxygxt, sai-suraj-27, shifan-z, winter-fish, yqhok1


ianvs's Issues

question about semantic-segmentation environment

I am following the semantic-segmentation README. When I run ianvs -f examples/robot/lifelong_learning_bench/semantic-segmentation/benchmarkingjob-simple.yaml, it shows:

Traceback (most recent call last):
  File "/home/icyfeather/miniconda3/envs/ianvs/bin/ianvs", line 33, in <module>
    sys.exit(load_entry_point('ianvs==0.1.0', 'console_scripts', 'ianvs')())
  File "/home/icyfeather/project/ianvs/core/cmd/benchmarking.py", line 41, in main
    raise RuntimeError(f"benchmarkingjob runs failed, error: {err}.") from err
RuntimeError: benchmarkingjob runs failed, error: testcase(id=6632e63a-19f0-11ef-8dca-8576dbea9f3c) runs failed, error: (paradigm=lifelonglearning) pipeline runs failed, error: module(type=basemodel loads class(name=BaseModel) failed, error: load module(url=./examples/robot/lifelong_learning_bench/semantic-segmentation/testalgorithms/rfnet/basemodel-simple.py) failed, error: libcudart.so.11.0: cannot open shared object file: No such file or directory..

After searching on the Internet, I learned it is probably a version conflict. But I hope there are detailed version requirements (such as CUDA version, torch version, etc.) to help me solve this.

And here is my env info:

(ianvs) icyfeather@gpu3:~/project/ianvs$ pip list
Package                  Version     Editable project location
------------------------ ----------- -----------------------------------------
absl-py                  2.1.0
addict                   2.4.0
asgiref                  3.8.1
astor                    0.8.1
cachetools               4.2.4
certifi                  2024.2.2
charset-normalizer       3.3.2
click                    8.1.7
colorlog                 4.7.2
contourpy                1.2.1
cycler                   0.12.1
fastapi                  0.68.2
filelock                 3.14.0
fonttools                4.51.0
fsspec                   2024.5.0
gast                     0.5.4
google-auth              1.35.0
google-auth-oauthlib     0.4.6
google-pasta             0.2.0
grpcio                   1.64.0
h11                      0.14.0
h5py                     3.11.0
ianvs                    0.1.0
idna                     3.7
importlib_metadata       7.1.0
importlib_resources      6.4.0
install                  1.3.5
Jinja2                   3.1.4
joblib                   1.2.0
Keras-Applications       1.0.8
Keras-Preprocessing      1.1.2
kiwisolver               1.4.5
Markdown                 3.6
markdown-it-py           3.0.0
MarkupSafe               2.1.5
matplotlib               3.9.0
mdurl                    0.1.2
minio                    7.0.4
mmcv                     2.0.0
mmdet                    3.1.0       /home/icyfeather/project/mmdetection
mmengine                 0.10.4
mpmath                   1.3.0
mypath                   0.1
networkx                 3.2.1
numpy                    1.23.4
nvidia-cublas-cu12       12.1.3.1
nvidia-cuda-cupti-cu12   12.1.105
nvidia-cuda-nvrtc-cu12   12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12        8.9.2.26
nvidia-cufft-cu12        11.0.2.54
nvidia-curand-cu12       10.3.2.106
nvidia-cusolver-cu12     11.4.5.107
nvidia-cusparse-cu12     12.1.0.106
nvidia-nccl-cu12         2.20.5
nvidia-nvjitlink-cu12    12.5.40
nvidia-nvtx-cu12         12.1.105
oauthlib                 3.2.2
opencv-python            4.9.0.80
packaging                24.0
pandas                   2.2.2
pillow                   10.3.0
pip                      24.0
platformdirs             4.2.2
prettytable              2.5.0
protobuf                 3.20.3
pyasn1                   0.6.0
pyasn1_modules           0.4.0
pycocotools              2.0.7
pydantic                 1.10.15
Pygments                 2.18.0
pyparsing                3.1.2
python-dateutil          2.9.0.post0
pytz                     2024.1
PyYAML                   6.0.1
requests                 2.32.2
requests-oauthlib        2.0.0
rich                     13.7.1
rsa                      4.9
scikit-learn             1.5.0
scipy                    1.13.1
sedna                    0.4.1
segment-anything         1.0         /home/icyfeather/project/segment-anything
setuptools               54.2.0
shapely                  2.0.4
six                      1.15.0
starlette                0.14.2
sympy                    1.12
tenacity                 8.0.1
tensorboard              2.3.0
tensorboard-plugin-wit   1.8.1
tensorflow-estimator     1.14.0
termcolor                2.4.0
terminaltables           3.1.10
threadpoolctl            3.5.0
tomli                    2.0.1
torch                    2.3.0
torchaudio               2.3.0
torchvision              0.18.0
tqdm                     4.66.4
triton                   2.3.0
typing_extensions        4.11.0
tzdata                   2024.1
urllib3                  2.2.1
uvicorn                  0.14.0
wcwidth                  0.2.13
websockets               9.1
Werkzeug                 3.0.3
wheel                    0.43.0
wrapt                    1.16.0
yapf                     0.40.2
zipp                     3.18.2
(ianvs) icyfeather@gpu3:~/project/ianvs$ nvidia-smi 
Sat May 25 01:11:54 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 555.42.02              Driver Version: 555.42.02      CUDA Version: 12.5     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 2080 Ti     Off |   00000000:02:00.0 Off |                  N/A |
|  0%   42C    P8              1W /  260W |      16MiB /  11264MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      1082      G   /usr/lib/xorg/Xorg                              9MiB |
|    0   N/A  N/A      1397      G   /usr/bin/gnome-shell                            3MiB |
+-----------------------------------------------------------------------------------------+

Should I downgrade my cuda version?
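
Not an authoritative answer, but a small diagnostic sketch that may help narrow this down: the traceback complains about libcudart.so.11.0 while the installed torch 2.3.0 wheels ship CUDA 12 libraries, so presumably some component (for example the old tensorflow-estimator/tensorboard stack or a prebuilt extension) was built against CUDA 11. Under that assumption, the snippet below prints what torch was built for and probes for both runtime libraries.

import ctypes
import torch

print("torch:", torch.__version__, "| built for CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

# The error message names libcudart.so.11.0; probe for the CUDA 11 and CUDA 12
# runtimes to see which ones the dynamic loader can actually find.
for lib in ("libcudart.so.11.0", "libcudart.so.12"):
    try:
        ctypes.CDLL(lib)
        print(lib, "-> found")
    except OSError:
        print(lib, "-> NOT found")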

Automatic Image Annotation Algorithm based on Sedna Lifelong Learning to reproduce open world object Segmentation

Introduction or background of this discussion:

Contents of this discussion:
Supported language: Chinese /English
Technology: AI
Programming language: Python
Project Description:
In traditional machine learning, a model is trained on known categories to detect objects within the range of those categories. However, such a model cannot recognize samples from unknown categories and will assign them to known categories.
Therefore, open-world object segmentation is a major research direction in artificial intelligence. This project aims to reproduce the paper "Segment Anything" published in 2023 and to apply the algorithm to automatic annotation of open-domain data. The paper proposes a data-engine tool for open-world object segmentation, which enables fast data annotation and a closed data loop.
Project output requirements:

  1. Code implementation of segmentation on open-set semantic data (such as the StreetHazards, Lost and Found, and Road Anomaly datasets)
  2. Integrate the reproduced open-world segmentation algorithm into the Sedna lifelong learning module
  3. The accuracy of open-world object segmentation (e.g., AP, mIoU) is greater than 0.45

Project technical requirements:
  1. Deep learning
  2. Python

Project results warehouse:
https://github.com/kubeedge/sedna

Tutor's name: Su Jingyong Email: [email protected]

References:
https://arxiv.org/abs/2304.02643

Parallel processing of multiple use cases is not supported yet

What would you like to be added/modified:
It seems that Ianvs does not yet support testing several groups of parameters in parallel.

Why is this needed:
Each use case spends most of its time on the training process. When a user wants to test several groups of parameters, serial training incurs an unbearable time overhead.
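
To make the request concrete, here is a minimal sketch (not Ianvs code; the function and parameter names are hypothetical) of how independent test cases could be dispatched in parallel with the standard library, assuming each test case can train in its own process:

from concurrent.futures import ProcessPoolExecutor, as_completed

def run_testcase(params):
    # Placeholder for one test case: train and evaluate with one group of
    # hyper-parameters, then return its metric results.
    return {"params": params, "accuracy": 0.0}

if __name__ == "__main__":
    param_groups = [
        {"learning_rate": 0.1, "epochs": 10},
        {"learning_rate": 0.01, "epochs": 10},
        {"learning_rate": 0.001, "epochs": 20},
    ]
    results = []
    with ProcessPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(run_testcase, p) for p in param_groups]
        for future in as_completed(futures):
            results.append(future.result())
    print(results)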

Heterogeneous Multiedge Inference for High Mobility Scenarios

What would you like to be added/modified:
Based on the current multiedge inference benchmark on Ianvs, we would like to extend multiedge inference to multiple heterogeneous edges (e.g., mobile phones, smart watches, laptops) to reduce the inference latency of a large DNN model in high-mobility scenarios, where the connection between cloud and edge is unreliable. To achieve this goal, it includes:

  1. build a benchmark for multi-edge inference in Ianvs;
  2. implement some basic algorithms for DNN partitioning across multiple ends;
  3. (Optional) develop a baseline algorithm for this benchmark;

Why is this needed:
In recent years, artificial intelligence models represented by LLM have put forward extremely high requirements for computing power. However, in high mobility scenarios, the connection between edges and clouds is unstable, making it difficult to ensure the quality of service. This results in extremely poor user experience for applications such as large models in high mobility scenarios.

However, computing power at the edge is not weak either. At present, more and more mobile phones, tablets, laptops, etc. are equipped with AI chips, allowing them to run neural networks locally, albeit with relatively high latency. We therefore ask whether it is possible to utilize the computing power of the multiple edge devices in one person's hands to reduce model inference latency and ensure service quality.

KubeEdge provides excellent collaborative foundational capabilities and provides examples of multilateral collaboration. Therefore, we plan to extend the multilateral collaboration to multiple heterogeneous edges based on this example.
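
To illustrate the kind of DNN partitioning meant in item 2 of this proposal, here is a minimal, hedged PyTorch sketch that splits a sequential model into contiguous segments, one per device, and runs them in order. A real heterogeneous-edge system would transfer intermediate tensors over the network between devices rather than between local segments.

import torch
import torch.nn as nn

# Toy model standing in for a large DNN.
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

def partition(seq_model, num_devices):
    """Split a Sequential into num_devices contiguous segments (naive, equal-size)."""
    layers = list(seq_model)
    size = -(-len(layers) // num_devices)  # ceiling division
    return [nn.Sequential(*layers[i:i + size]) for i in range(0, len(layers), size)]

segments = partition(model, num_devices=2)

x = torch.randn(1, 128)
with torch.no_grad():
    for segment in segments:  # in a real system each segment runs on a different edge device
        x = segment(x)
print(x.shape)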

Recommended Skills:
Python, KubeEdge-Ianvs

Useful link:
https://github.com/kubeedge/ianvs/tree/main/examples/MOT17/multiedge_inference_bench/pedestrian_tracking

Couldn't run example in scene-based-unknown-task-recognition

What happened:
I followed every instruction in the Quick Start of scene-based-unknown-task-recognition, and I am pretty sure that I installed the feature-lifelong-n branch of Ianvs successfully, but I still couldn't run the example.

The log information is as follows:

un_classes:24
Upsample layer: in = 128, skip = 64, out = 128
Upsample layer: in = 128, skip = 128, out = 128
Upsample layer: in = 128, skip = 256, out = 128
128
Model loaded successfully!
Traceback (most recent call last):
File "/ianvs/project/ianvs/core/testcasecontroller/testcase/testcase.py", line 74, in run
res, system_metric_info = paradigm.run()
File "/ianvs/project/ianvs/core/testcasecontroller/algorithm/paradigm/lifelong_learning/lifelong_learning.py", line 91, in run
self.cloud_task_index = self._train(self.cloud_task_index,
File "/ianvs/project/ianvs/core/testcasecontroller/algorithm/paradigm/lifelong_learning/lifelong_learning.py", line 185, in _train
job = self.build_paradigm_job(ParadigmType.LIFELONG_LEARNING.value)
File "/ianvs/project/ianvs/core/testcasecontroller/algorithm/paradigm/base.py", line 103, in build_paradigm_job
return LifelongLearning(
File "/root/miniconda3/lib/python3.8/site-packages/sedna/core/lifelong_learning/lifelong_learning.py", line 146, in init
self.unseen_sample_recognition.get("param", {}))
AttributeError: 'UnseenSampleRecognitionByScene' object has no attribute 'get'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/ianvs/project/ianvs/core/testcasecontroller/testcasecontroller.py", line 54, in run_testcases
res, time = (testcase.run(workspace), utils.get_local_time())
File "/ianvs/project/ianvs/core/testcasecontroller/testcase/testcase.py", line 79, in run
raise Exception(
Exception: (paradigm=lifelonglearning) pipeline runs failed, error: 'UnseenSampleRecognitionByScene' object has no attribute 'get'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/ianvs/project/ianvs/core/cmd/benchmarking.py", line 37, in main
job.run()
File "/ianvs/project/ianvs/core/cmd/obj/benchmarkingjob.py", line 88, in run
succeed_testcases, test_results = self.testcase_controller.run_testcases(self.workspace)
File "/ianvs/project/ianvs/core/testcasecontroller/testcasecontroller.py", line 56, in run_testcases
raise Exception(f"testcase(id={testcase.id}) runs failed, error: {err}") from err
Exception: testcase(id=8fe90ddc-b903-11ed-bbd3-02420a00300a) runs failed, error: (paradigm=lifelonglearning) pipeline runs failed, error: 'UnseenSampleRecognitionByScene' object has no attribute 'get'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/root/miniconda3/bin/ianvs", line 33, in
sys.exit(load_entry_point('ianvs==0.1.0', 'console_scripts', 'ianvs')())
File "/ianvs/project/ianvs/core/cmd/benchmarking.py", line 41, in main
raise Exception(f"benchmarkingjob runs failed, error: {err}.") from err
Exception: benchmarkingjob runs failed, error: testcase(id=8fe90ddc-b903-11ed-bbd3-02420a00300a) runs failed, error: (paradigm=lifelonglearning) pipeline runs failed, error: 'UnseenSampleRecognitionByScene' object has no attribute 'get'.

Anything else we need to know?:
The environment is Ubuntu18.04, python3.8.5

LFX Mentorship 2023 01-Mar-May Challenge - for #48

Introduction

For those who want to apply for LFX mentorship for #48, this is a selection test for the application. This LFX mentorship aims to build lifelong learning benchmarking on KubeEdge-Ianvs which is a distributed synergy AI benchmarking platform. Based on Ianvs, we designed this challenge to evaluate the candidates.

Requirements

Each applicant of LFX Mentorship can try out the following two tasks and gain a total accumulated score according to completeness. In the end, we'll publish the top five applicants and their total scores. Finally, the one with the highest score will successfully become the mentee of this LFX Mentorship project. All the titles of task output such as pull requests (PRs) should be prefixed with LFX Mentorship.

Task 1

Content

  1. Build a public dataset benchmarking website to present the example dataset cityscapes.
    • The applicant might want to design the website with a style similar to the example website of coda.
    • In this task, to reduce the task burden of the applicant, we provide a clean and re-organized dataset based on the existing public CITYSCAPES merely for selection purposes. Note that another much more complicated new dataset will be provided to the mentee after the mentorship starts.
  2. This benchmarking website should exhibit the contents listed in Table 1.
  3. Submit a PR that includes a public link of the dataset benchmarking website and the corresponding dataset introduction.
    • We suggest that the domain name of the website be named after a personal account (e.g., jack123-LFX.github.io for applicant Jack123).
Home page: Dataset overview; Lifelong learning algorithm overview; Data sample display
Documentation page: Dataset partition description; Data statistics; Data format; Data annotation
Download page: Instructions and links
Benchmark page: Various algorithm and metric results

Table 1. Task 1 overview

Resources

Task 2

Content

  1. Create a new example on KubeEdge Ianvs based on the semantic segmentation dataset cityscapes, for single task learning, incremental learning, or lifelong learning.
    • The example mainly includes a new baseline algorithm (not yet existing on Ianvs) which can run on Ianvs with cityscapes.
    • The baseline algorithm can be a new unseen task detection algorithm, processing algorithm, or base model.
    • Reviewers will look at the Mean Intersection over Union (mIoU) to evaluate algorithm performance; a computation sketch follows this list.
    • Note that if the applicant wants to use single-task learning or incremental learning, s/he needs to replace the original object-detection dataset pcb-aoi with the targeted semantic-segmentation dataset cityscapes, and replace the original object-detection model FPN with a semantic-segmentation model, e.g., RFNet. An applicant who tries to tackle lifelong learning does not necessarily need to do that, because the dataset and base model are both prepared.
  2. For each algorithm paradigm, submit an experiment report as a PR which includes the algorithm design, experiment results, and a README document.
  3. Submit a PR with the code of this new example.
    • The organization of the code can refer to pcb-aoi.
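
For reference, a minimal sketch of how the mIoU metric mentioned above can be computed from predicted and ground-truth label maps (plain NumPy, assuming integer class IDs; an "ignore" label would need additional masking):

import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 19, size=(512, 1024))
gt = np.random.randint(0, 19, size=(512, 1024))
print(mean_iou(pred, gt, num_classes=19))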

Resources

Rating

Task 1

All the items that should be completed in task 1 are listed in Table 2 and item scores will be accumulated as the total score of this task.

Item Score
Set up a basic frontend framework 10
The frontend pages can be accessed publicly 10
Home page Dataset overview 5
Lifelong learning algorithm overview 5
Data sample display 5
Documentation page Dataset partition description 5
Data statistics 5
Data format 5
Data annotation 5
Download page Instructions and links 5
Benchmark page Various algorithm and metric results 20

Table 2. Task 1 scoring rules

Task 2

  1. Completion of different algorithm paradigms has different scores as shown in Table 3.
  2. For examples under the same algorithm paradigm, an applicant will obtain 20 extra points only if his/her example performs the best in the ranking. When ranking, reviewers will look at the Mean Intersection over Union (mIoU) to evaluate algorithm performance.
    • That is, only the applicant ranking top 1 gets the extra points. Good luck!
  3. Each applicant can try to implement multiple examples with different algorithm paradigms. But only the algorithm paradigm with the highest score will be counted.
  4. For the examples that cannot be run successfully directly from the submitted code and the README instructions, the total score for task 2 will be 0. So, be cautious about the code and docs!
Item Score
Lifelong learning 50
Incremental learning 30
Single task learning 10
Highest metric result 20

Table 3. Task 2 scoring rules

Deadline

According to the timeline of LFX mentorship 2023 01-Mar-May, the admission decision deadline is March 7th. Since we have to process the internal review and decide, the final date for PR submissions of the pretest will be March 5th, 8:00 AM PDT.

Link model codes url to ianvs directly instead of packaging models as third-party wheels

What would you like to be added/modified:
I wonder if it is possible for Ianvs to reference local model code (e.g., a local directory) directly in algorithm.yaml (or other config files) instead of packaging models as third-party wheels.

Why is this needed:
Packaging local model code might be tedious or even difficult for algorithm developers. More efficient and lightweight development methods are preferred.
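
For what it is worth, a minimal sketch of the kind of loading this request implies, using importlib from the standard library to import an estimator class directly from a local .py file instead of an installed wheel. The file path and class name below are borrowed from the semantic-segmentation example mentioned earlier on this page and serve only as placeholders.

import importlib.util

def load_class_from_path(path, class_name):
    """Load a class from a local .py file without packaging it as a wheel."""
    spec = importlib.util.spec_from_file_location("user_basemodel", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return getattr(module, class_name)

# Placeholder path and class name (taken from the semantic-segmentation example above).
BaseModel = load_class_from_path(
    "./examples/robot/lifelong_learning_bench/semantic-segmentation/testalgorithms/rfnet/basemodel-simple.py",
    "BaseModel",
)
estimator = BaseModel()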

following quickstart guide and meet No matching distribution found for tensorflow~=1.14.0

What happened:

I am following the QuickStart guide to install the environment. When I run this as the guide instructs:

cd /ianvs/project/ianvs/
python -m pip install examples/resources/algorithms/FPN_TensorFlow-0.1-py3-none-any.whl

it shows:

(ianvs) icyfeather@gpu3:~/project/ianvs$ pip install examples/resources/algorithms/FPN_TensorFlow-0.1-py3-none-any.whl
Processing ./examples/resources/algorithms/FPN_TensorFlow-0.1-py3-none-any.whl
Collecting wheel~=0.36.2 (from FPN-TensorFlow==0.1)
  Using cached wheel-0.36.2-py2.py3-none-any.whl.metadata (2.3 kB)
Collecting libs~=0.0.10 (from FPN-TensorFlow==0.1)
  Using cached libs-0.0.10-py3-none-any.whl.metadata (831 bytes)
INFO: pip is looking at multiple versions of fpn-tensorflow to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement tensorflow~=1.14.0 (from fpn-tensorflow) (from versions: 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.6.0rc0, 2.6.0rc1, 2.6.0rc2, 2.6.0, 2.6.1, 2.6.2, 2.6.3, 2.6.4, 2.6.5, 2.7.0rc0, 2.7.0rc1, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.7.4, 2.8.0rc0, 2.8.0rc1, 2.8.0, 2.8.1, 2.8.2, 2.8.3, 2.8.4, 2.9.0rc0, 2.9.0rc1, 2.9.0rc2, 2.9.0, 2.9.1, 2.9.2, 2.9.3, 2.10.0rc0, 2.10.0rc1, 2.10.0rc2, 2.10.0rc3, 2.10.0, 2.10.1, 2.11.0rc0, 2.11.0rc1, 2.11.0rc2, 2.11.0, 2.11.1, 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0, 2.15.0.post1, 2.15.1, 2.16.0rc0, 2.16.1)
ERROR: No matching distribution found for tensorflow~=1.14.0

How to reproduce it (as minimally and precisely as possible):

conda create -n ianvs python=3.9
conda activate ianvs
python -m pip install ./examples/resources/third_party/*
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
python setup.py install
python -m pip install examples/resources/algorithms/FPN_TensorFlow-0.1-py3-none-any.whl

Anything else we need to know?:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.6 LTS"
python=3.9

Large Language Model Edge Benchmark Suite: Implementation on KubeEdge-Ianvs

What would you like to be added/modified:
A benchmark suite for large language models deployed at the edge using KubeEdge-Ianvs:

  1. Interface Design and Usage Guidelines Document;
  2. Implementation of NLP Large Language Models (LLMs) Benchmark Suite Based on Ianvs
    2.1 Extensive support for mainstream industry benchmark dataset formats such as MMLU, CMMLU, and other open-source datasets.
    2.2 Visualization of the LLMs invocation process, including console output, logging of task execution and monitoring, etc.
  3. Generation of Benchmark Testing Reports Based on Ianvs
    3.1 Test at least three types of LLMs.
    3.2 Present computation results of performance metrics such as ACC, Recall, F1, latency, bandwidth, etc., with metric dimensions referencing the national standard "Artificial Intelligence - Pretrained Models Part 2: Evaluation Metrics and Methods".
  4. (Advanced) Efficient Evaluation: Concurrent execution of tasks, automatic request and result collection.
  5. (Advanced) Integration of Benchmark Testing Suite into the LLMs Training Process.

Why is this needed:
Due to the size of models and data, Large Language Models (LLMs) are often trained in the cloud. Simultaneously, due to concerns regarding commercial confidentiality or user privacy during the usage of LLMs, deploying LLMs on edge devices has gradually become a research hotspot. Quantization techniques for LLMs are enabling edge-side inference; however, the limited resources of edge devices have an impact on the inference latency and accuracy compared to cloud-based training of LLMs. Ianvs aims to conduct edge-side deployment benchmark tests for cloud-trained LLMs utilizing container resource management capabilities and edge-cloud synergy abilities.
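
As a rough illustration of the metric computation in item 3.2 above, here is a minimal sketch that scores MMLU-style multiple-choice answers for accuracy and records per-request latency. The model_answer function is a placeholder for an actual LLM call; Recall and F1 would be computed analogously for classification-style tasks.

import time

def model_answer(question, choices):
    # Placeholder for invoking an edge-deployed LLM; returns an option letter.
    return "A"

samples = [
    {"question": "2 + 2 = ?", "choices": ["4", "3", "5", "6"], "answer": "A"},
    {"question": "Capital of France?", "choices": ["Berlin", "Paris", "Rome", "Madrid"], "answer": "B"},
]

correct, latencies = 0, []
for s in samples:
    start = time.perf_counter()
    pred = model_answer(s["question"], s["choices"])
    latencies.append(time.perf_counter() - start)
    correct += int(pred == s["answer"])

print("ACC:", correct / len(samples))
print("avg latency (s):", sum(latencies) / len(latencies))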

Recommended Skills:
TensorFlow/Pytorch, LLMs, Docker

Useful links:
KubeEdge-Ianvs
KubeEdge-Ianvs Benchmark Test Cases
Building Edge-Cloud Synergy Simulation Environment with KubeEdge-Ianvs
Artificial Intelligence - Pretrained Models Part 2: Evaluation Metrics and Methods
Example LLMs Benchmark List
Docker Resource Management

Implementation of a Class Incremental Learning Algorithm Evaluation System based on Ianvs

Introduction or background of this discussion:
OSPP project: "Implementation of a Class Incremental Learning Algorithm Evaluation System based on Ianvs"

Contents of this discussion:

  1. Using the specified data set to reproduce the lifelong learning semantic segmentation algorithm on KubeEdge-Ianvs; The data set includes cityscapes, SYNTHIA, and KubeEdge SIG AI open-source cloud robot data set.
  2. Displaying the algorithm test report (including ranking, time, algorithm name, data set name and distribution type, test indicators, etc.) on KubeEdge-Ianvs;

Project Description:

Driving is a skill that humans do not forget in natural situations; they can easily drive in multiple geographic locations. This suggests that humans are inherently endowed with a lifelong learning capacity, with little forgetting of previously learned visual patterns when faced with domain shifts or new objects to be recognized. In the same way, many researchers hope that semantic segmentation models also have the ability to use a common joint model to learn datasets of multiple scenes in sequence. It is expected that the model can gradually learn new scene domains while maintaining performance on the old domains, without accessing datasets from the old domains.

This project aims to reproduce the lifelong learning semantic segmentation algorithm from the WACV 2022 paper "Multi-Domain Incremental Learning for Semantic Segmentation", apply it to datasets of robot vision scenes, and serve as a baseline algorithm of KubeEdge-Ianvs for developers to use. The datasets include Cityscapes, SYNTHIA, and the KubeEdge SIG AI open-source cloud robot dataset.

LFX Mentorship 2023 01-Mar-May Challenge - for #48

Introduction

For those who want to apply for LFX mentorship for #48, this is a selection test for the application. This LFX mentorship aims to build lifelong learning benchmarking on KubeEdge-Ianvs which is a distributed synergy AI benchmarking platform. Based on Ianvs, we designed this challenge to evaluate the candidates.

Requirements

Each applicant of LFX Mentorship can try out the following two tasks and gain a total accumulated score according to completeness. In the end, we'll publish the top five applicants and their total scores. Finally, the one with the highest score will successfully become the mentee of this LFX Mentorship project. All the titles of task output such as pull requests (PRs) should be prefixed with LFX Mentorship.

Task 1

Content

  1. Build a public dataset benchmarking website to present the example dataset cityscapes (https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/robo_dog_delivery/cityscapes.zip).
    • The applicant might want to design the website with a style similar to the example website of coda (https://coda-dataset.github.io/index.html).
    • In this task, to reduce the task burden of the applicant, we provide a clean and re-organized dataset based on the existing public CITYSCAPES (https://www.cityscapes-dataset.com/) merely for selection purposes. Note that another much more complicated new dataset will be provided to the mentee after the mentorship starts.
  2. This benchmarking website should exhibit the contents listed in Table 1.
  3. Submit a PR that includes a public link of the dataset benchmarking website and the corresponding dataset introduction.
    • We suggest that the domain name of the website be named after a personal account (e.g., jack123-LFX.github.io for applicant Jack123).
Home page: Dataset overview; Lifelong learning algorithm overview; Data sample display
Documentation page: Dataset partition description; Data statistics; Data format; Data annotation
Download page: Instructions and links
Benchmark page: Various algorithm and metric results

Table 1. Task 1 overview

Resources

Task 2

Content

  1. Create a new example on KubeEdge Ianvs based on the semantic segmentation dataset cityscapes (https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/robo_dog_delivery/cityscapes.zip), for single task learning (https://ianvs.readthedocs.io/en/latest/proposals/algorithms/single-task-learning/fpn.html), incremental learning (https://ianvs.readthedocs.io/en/latest/proposals/algorithms/incremental-learning/basicIL-fpn.html), or lifelong learning (https://github.com/kubeedge/ianvs/tree/feature-lifelong-n/examples/scene-based-unknown-task-recognition/lifelong_learning_bench).
    • The example mainly includes a new baseline algorithm (not yet existing on Ianvs) which can run on Ianvs with cityscapes.
    • The baseline algorithm can be a new unseen task detection algorithm, processing algorithm, or base model.
    • Reviewers will look at the Mean Intersection over Union (mIoU) to evaluate algorithm performance.
    • Note that if the applicant wants to use single-task learning or incremental learning, s/he needs to replace the original object-detection dataset pcb-aoi (https://github.com/kubeedge/ianvs/tree/main/examples/pcb-aoi) with the targeted semantic-segmentation dataset cityscapes, and replace the original object-detection model FPN with a semantic-segmentation model, e.g., RFNet. An applicant who tries to tackle lifelong learning does not necessarily need to do that, because the dataset and base model are both prepared.
  2. For each algorithm paradigm, submit an experiment report as a PR which includes the algorithm design, experiment results, and a README document.
    • The README document shows instructions for testing and verifying the submitted example for reviewers. An example is available in the unseen task recognition readme document (https://github.com/kubeedge/ianvs/blob/feature-lifelong-n/examples/scene-based-unknown-task-recognition/lifelong_learning_bench/Readme.md).
    • An example of the algorithm design is available in the unseen task recognition proposal (https://github.com/kubeedge/ianvs/blob/main/docs/proposals/algorithms/lifelong-learning/Unknown_Task_Recognition_Algorithm_Reproduction_based_on_Lifelong_Learning_of_Ianvs.md#5-design-details).
    • An example of the experiment results is available in the leaderboard of single task learning (https://ianvs.readthedocs.io/en/latest/leaderboards/leaderboard-in-industrial-defect-detection-of-PCB-AoI/leaderboard-of-single-task-learning.html).
  3. Submit a PR with the code of this new example.
    • The organization of the code can refer to pcb-aoi (https://github.com/kubeedge/ianvs/tree/main/examples/pcb-aoi).

Resources

Rating

Task 1

All the items that should be completed in task 1 are listed in Table 2 and item scores will be accumulated as the total score of this task.

Item Score
Set up a basic frontend framework 10
The frontend pages can be accessed publicly 10
Home page Dataset overview 5
Lifelong learning algorithm overview 5
Data sample display 5
Documentation page Dataset partition description 5
Data statistics 5
Data format 5
Data annotation 5
Download page Instructions and links 5
Benchmark page Various algorithm and metric results 20

Table 2. Task 1 scoring rules

Task 2

  1. Completion of different algorithm paradigms has different scores as shown in Table 3.
  2. For examples under the same algorithm paradigm, an applicant will obtain 20 extra points only if his/her example performs the best in the ranking. When ranking, reviewers will look at the Mean Intersection over Union (mIoU) to evaluate algorithm performance.
    • That is, only the applicant ranking top 1 gets the extra points. Good luck!
  3. Each applicant can try to implement multiple examples with different algorithm paradigms. But only the algorithm paradigm with the highest score will be counted.
  4. For the examples that cannot be run successfully directly from the submitted code and the README instructions, the total score for task 2 will be 0. So, be cautious about the code and docs!
Item Score
Lifelong learning 50
Incremental learning 30
Single task learning 10
Highest metric result 20

Table 3. Task 2 scoring rules

Deadline

According to [the timeline of LFX mentorship 2023 01-Mar-May](https://github.com/cncf/mentoring/tree/main/lfx-mentorship/2023/01-Mar-May), the admission decision deadline is March 7th. Since we have to process the internal review and decide, the final date for PR submissions of the pretest will be March 5th, 8:00 AM PDT.

Cloud-edge collaborative inference for LLM based on KubeEdge-Ianvs

What would you like to be added/modified:
This issue aims to build a cloud-edge collaborative inference framework for LLM on KubeEdge-Ianvs. Namely, it aims to help all cloud-edge LLM developers improve inference accuracy with strong privacy and fast inference speed. This issue includes:

  1. Implement a benchmark of LLM tasks (e.g. basic LLM tasks such as user question answering, code generation, or text translation) in KubeEdge-Ianvs.
  2. An example of LLM cloud-edge collaborative inference implemented in KubeEdge-Ianvs.
  3. (Advanced) Implement cloud-edge collaborative algorithms for LLMs, such as speculative decoding, etc.

Why is this needed:
At present, LLM models at the 10-billion and 100-billion parameter scale, led by Llama2-70b and Qwen-72b, can only be deployed in the cloud, where sufficient computing power is available to provide inference services. However, for users of edge terminals, cloud LLM services on the one hand face the problems of slow inference speed and long response delay; on the other hand, uploading edge private data to the cloud for processing may risk privacy disclosure. At the same time, the inference accuracy of LLM models that can be deployed in edge environments (such as TinyLlama-1.1b) is much lower than that of cloud LLMs. Therefore, using a cloud LLM or an edge LLM alone cannot simultaneously take into account privacy protection, real-time inference, and inference accuracy. We need to combine the high inference accuracy of cloud LLMs with the strong privacy and fast inference of edge LLMs through a cloud-edge collaboration strategy, so as to better meet the needs of edge users.
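
A minimal, hedged sketch of one such collaboration strategy (query routing, in the spirit of the "Hybrid LLM" reference below): answer with the edge model first and escalate to the cloud model only when edge confidence is low. Both model functions and the threshold are placeholders, not part of Ianvs.

def edge_llm(prompt):
    # Placeholder for a small on-device model; returns (answer, confidence in [0, 1]).
    return "draft answer from the edge model", 0.42

def cloud_llm(prompt):
    # Placeholder for a large cloud-hosted model.
    return "high-quality answer from the cloud model"

CONFIDENCE_THRESHOLD = 0.7  # assumed tuning knob

def collaborative_answer(prompt):
    answer, confidence = edge_llm(prompt)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer, "edge"          # private data never leaves the device
    return cloud_llm(prompt), "cloud"  # escalate hard queries for better accuracy

print(collaborative_answer("Translate 'hello' into French."))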

Recommended Skills:
KubeEdge-Ianvs, Python, Pytorch, LLMs

Useful links:
Introduction to Ianvs
Unleashing the Power of Edge-Cloud Generative AI in Mobile Networks: A Survey of AIGC Services
Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing

Smart Coding benchmark suite: built on KubeEdge-Ianvs

What would you like to be added/modified:

  1. Build a collaborative code intelligent agent alignment dataset for LLMs:
    • The dataset should include behavioral trajectories, feedback, and iterative processes of software engineers during development, as well as relevant code versions and annotation information.
    • The dataset should cover code scenarios of different programming languages, business domains, and complexities.
    • The dataset should comply with privacy protection and intellectual property requirements, providing good accessibility and usability.
  2. Design a code intelligent agent collaborative evaluation benchmark for LLMs:
    • The evaluation benchmark should include common tasks of code intelligent agents such as code generation, recommendation, and analysis.
    • Evaluation metrics should cover multiple dimensions including functionality, reliability, interpretability, etc., matching the feedback and requirements of software engineers.
    • The evaluation benchmark should assess the performance of LLMs in collaborative code intelligent agent tasks and provide a basis for further algorithm optimization.
  3. Integrate the dataset and evaluation benchmark into the KubeEdge-Ianvs framework:
    • Incorporate the dataset and evaluation benchmark as part of the Ianvs framework, providing good scalability and integrability.
    • Ensure that the dataset and evaluation benchmark can efficiently run on edge devices within the Ianvs framework and seamlessly collaborate with other functional modules of Ianvs.
    • Release an upgraded version of the Ianvs framework and promote it to developers and researchers in the fields of edge computing and AI.

By implementing this project, we aim to provide crucial datasets and evaluation benchmarks for the further development of LLMs in the field of code intelligent agents, promote efficient collaboration between LLMs and software engineers in edge computing environments, and drive innovation and application of edge intelligence technology.

Why is this needed:

Large Language Models (LLMs) have demonstrated powerful capabilities in tasks such as code generation, automatic programming, and code analysis. However, these models are typically trained on generic code data and often fail to fully leverage the collaboration and feedback from software engineers in real-world scenarios. To construct a more intelligent and efficient code ecosystem, it is necessary to establish a collaborative code dataset and evaluation benchmark to facilitate tight collaboration between LLMs and software engineers. This project aims to build a collaborative code intelligent agent alignment dataset and evaluation benchmark for LLMs based on the open-source edge computing framework KubeEdge-Ianvs. This dataset will include behavioral trajectories, feedback, and iterative processes of software engineers during development, as well as relevant code versions and annotation information. Through this data, we will design evaluation metrics and benchmarks to measure the performance of LLMs in tasks such as code generation, recommendation, and analysis, fostering collaboration between LLMs and software engineers.

Recommended Skills:
Proficiency in large language model fine-tuning
Python programming skills
Preferably a background in software engineering (familiarity with formal verification is a plus)

Useful links:
https://www.swebench.com/

https://fine-grained-hallucination.github.io/

https://cloud.189.cn/t/36JV7fvyIv2q (access code: evr9)

Federated Incremental Learning for Label Scarcity Scenarios based on KubeEdge-Ianvs

What would you like to be added/modified:

This issue aims to achieve federated incremental learning in the case of sparse samples in KubeEdge-Ianvs, combining the advantages of existing federated semi-supervised learning and federated incremental learning methods, including but not limited to:

  1. Implement a benchmark test of federated semi-supervised incremental learning in KubeEdge-Ianvs, using the CIFAR-100 and ILSVRC2012 datasets and measurements such as accuracy, forgetting rate, etc.
  2. Propose a federated semi-supervised incremental learning method with its benchmark results.

Why is this needed:

In edge environments, data continuously arrives at edge devices over time, with the number of categories contained within it increasing steadily. Due to the cost associated with labeling, only a small fraction of this data is labeled. To leverage this data for model optimization, collaborative distributed model training can be conducted among edge devices through federated learning. However, traditional federated learning only considers supervised learning in scenarios where data remains static, thus it cannot effectively train on dynamically changing datasets with sparse labeling.

This issue aims to fully utilize streaming sparse labeled data from different edge devices, employing federated learning to conduct distributed training of models. This approach will mitigate the catastrophic forgetting in models in scenarios of class-incremental learning, thereby enhancing the model's generalization ability.
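
As background for the federated part of this proposal, a minimal sketch of FedAvg-style aggregation, where a server averages client parameters in proportion to their local sample counts (NumPy only; the incremental and semi-supervised aspects are outside the scope of this sketch):

import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors, weighted by local sample count."""
    total = sum(client_sizes)
    aggregated = np.zeros_like(client_weights[0])
    for weights, size in zip(client_weights, client_sizes):
        aggregated += (size / total) * weights
    return aggregated

# Three edge clients with differently sized (and sparsely labeled) local datasets.
clients = [np.random.randn(10) for _ in range(3)]
sizes = [120, 45, 300]
print(fedavg(clients, sizes))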

Recommended Skills:
Deep learning, Python, KubeEdge-Ianvs

Useful links:
Introduction to Ianvs
Quick Start
How to test algorithms with Ianvs
[NeurIPS'22] SemiFL: Semi-Supervised Federated Learning for Unlabeled Clients with Alternate Training
[CVPR'22] Federated Class-Incremental Learning

Inconsistent with interface name 'initial_model_url' and environment variable 'base_model_url'

What would you like to be added/modified:
While configuring algorithm.yaml, the pre-trained model URL is specified as initial_model_url; however, in test_algorithms/basemodel.py it is parsed and renamed as base_model_url.

Why is this needed:
This inconsistency might be confusing for algorithm developers. May I recommend unifying these two names rather than keeping the inconsistency?
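
Until the names are unified, a tiny illustrative sketch of a tolerant parser that accepts either key (the key names are the two from this issue; this is not the actual Ianvs code):

def resolve_model_url(config: dict) -> str:
    """Accept both spellings while the YAML field and the interface disagree."""
    url = config.get("initial_model_url") or config.get("base_model_url")
    if url is None:
        raise KeyError("expected 'initial_model_url' (or 'base_model_url') in config")
    return url

print(resolve_model_url({"initial_model_url": "./models/pretrained.pth"}))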

Couldn't run example in curb-detection/lifelong_learning_bench

What happened:
I tried to run the example in curb-detection/lifelong_learning_bench. I followed every instruction in the Quick Start. It went well in the training and evaluation parts, but when it came to the inference part, an error occurred.

The log information was as follows:
[2023-03-02 22:20:31,947] task_evaluation.py(69) [INFO] - real_semantic_segamentation_model scores: {'accuracy': 0.13164643646434354}
[2023-03-02 22:20:31,966] lifelong_learning.py(395) [INFO] - Task evaluation finishes.
[2023-03-02 22:20:31,967] lifelong_learning.py(398) [INFO] - upload kb index from index.pkl to /ianvs/lifelong_learning_bench/workspace/benchmarkingjob/rfnet_lifelong_learning/ef4da0e4-b903-11ed-bb65-02420a00300a/output/eval/1/index.pkl
Traceback (most recent call last):
File "/ianvs/project/ianvs/core/testcasecontroller/testcase/testcase.py", line 74, in run
res, system_metric_info = paradigm.run()
File "/ianvs/project/ianvs/core/testcasecontroller/algorithm/paradigm/lifelong_learning/lifelong_learning.py", line 100, in run
inference_results, unseen_task_train_samples = self._inference(
File "/ianvs/project/ianvs/core/testcasecontroller/algorithm/paradigm/lifelong_learning/lifelong_learning.py", line 153, in _inference
res, is_unseen_task, _ = job.inference(data, **kwargs)
File "/root/miniconda3/lib/python3.8/site-packages/sedna/core/lifelong_learning/lifelong_learning.py", line 453, in inference
seen_samples, unseen_samples = unseen_sample_recognition(
TypeError: call() got an unexpected keyword argument 'mode'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/ianvs/project/ianvs/core/testcasecontroller/testcasecontroller.py", line 54, in run_testcases
res, time = (testcase.run(workspace), utils.get_local_time())
File "/ianvs/project/ianvs/core/testcasecontroller/testcase/testcase.py", line 79, in run
raise Exception(
Exception: (paradigm=lifelonglearning) pipeline runs failed, error: call() got an unexpected keyword argument 'mode'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/ianvs/project/ianvs/core/cmd/benchmarking.py", line 37, in main
job.run()
File "/ianvs/project/ianvs/core/cmd/obj/benchmarkingjob.py", line 88, in run
succeed_testcases, test_results = self.testcase_controller.run_testcases(self.workspace)
File "/ianvs/project/ianvs/core/testcasecontroller/testcasecontroller.py", line 56, in run_testcases
raise Exception(f"testcase(id={testcase.id}) runs failed, error: {err}") from err
Exception: testcase(id=ef4da0e4-b903-11ed-bb65-02420a00300a) runs failed, error: (paradigm=lifelonglearning) pipeline runs failed, error: call() got an unexpected keyword argument 'mode'

Anything else we need to know?:
The environment is Ubuntu18.04, python3.8.5

Cloud-Robotic AI Benchmarking for Edge-cloud Collaborative Lifelong Learning

What would you like to be added/modified:
Based on real-world datasets provided by industry members of KubeEdge SIG AI, this issue aims to build a lifelong learning benchmark on KubeEdge-Ianvs. Namely, it aims to help all Edge AI application developers validate and select the best-matched lifelong learning algorithm. It includes:

  1. Work together to release a new dataset to the public!
  2. Implement critical algorithm or system metrics, e.g., BWT, FWT and throughput;
  3. (Optional) Develop a baseline algorithm for this benchmark;

Why is this needed:
It is estimated that by 2025, 75% of the world's data will be generated at the edge, and the computing power on the cloud will be more abundant. Edge-cloud collaborative artificial intelligence will become an inevitable trend, and its demand will be further released. Among them, the global service robot market is expected to reach 90-170 billion US dollars in 2030. The use of cloud-native edge computing and artificial intelligence technology to deal with the issues of the robot industry and complete industrial transformation has also become the focus of the industry.

In recent years, lifelong learning-related algorithms such as Lifelong SLAM and Lifelong Object Detection have become popular for the problems of edge-data heterogeneity and small samples, but real-world practice requires further consideration of their edge-cloud collaborative nature. To further accelerate research and the transformation of results, the KubeEdge community released the first open-source edge-cloud collaborative lifelong learning framework and its resource orchestration template on KubeEdge-Sedna in June 2021. Moreover, the collaborative AI benchmarking platform KubeEdge-Ianvs was released in July 2022, together with related benchmark datasets and compute metrics.

This project aims to develop edge-cloud collaborative lifelong learning benchmarks that are suitable for robotic scenarios based on KubeEdge-Ianvs. This project will help all Edge AI application developers validate and select the best-matched lifelong learning algorithm. The benchmark can include datasets, metrics, and algorithms. Specific applications include but are not limited to robot navigation, inspection, cleaning, delivery, etc. KubeEdge SIG AI has already prepared real-world datasets for everyone to explore!

Recommended Skills:
TensorFlow/Pytorch, Python

Useful links:
Introduction to Ianvs
Quick Start
How to test algorithms with Ianvs
Testing incremental learning in industrial defect detection
[Open Source Summit Japan] From Ground to Space: Cloud-Native Edge Machine-Learning Case Studies with KubeEdge-Sedna
[ACM e-Energy'22] Towards Lifelong Thermal Comfort Prediction with KubeEdge-Sedna
[ACM CIKM'22] Towards Edge-Cloud Collaborative Machine Learning: A Quality-aware Task Partition Framework
[KubeEdge Cloud Native Edge Computing Open Course] Advanced Edge Intelligence: Adapting to Diverse Scenarios and Handling Distributed Systems
[KEAW'22] Innovative Exploration and Implementation of Edge-Cloud Collaborative Lifelong Learning in Smart Campuses and Industrial Fields

Real-Time IoT Perception Systems Based on Edge-Cloud Collaboration with Large Foundation Models

Introduction or background of this discussion:
OSPP project: "Real-Time IoT Perception Systems Based on Edge-Cloud Collaboration with Large Foundation Models"

Contents of this discussion:
Project Output Requirements:

  1. Develop a Real-Time Perception Application System Based on Large Foundation Models on KubeEdge-Sedna (i.e., an edge-cloud collaboration platform), which supports efficient generalization capabilities.
  2. The performance of the system will be tested on actual edge platforms (optional).

Project Technical Requirements:

  1. Proficient in Python and have a basic understanding of edge platforms such as the NVIDIA Jetson series.
  2. Familiar with at least one artificial intelligence framework and capable of developing algorithms and deploying them to edge platforms.

Project Description:
Real-time perception systems are an essential component of intelligent Internet of Things (IoT) devices such as industrial robots and household mobile devices. When performing basic perception tasks such as object detection, the limited resources of edge platforms pose great challenges to the accuracy and adaptiveness of models. For example, in the case of object detection, new environments bring new object classes and states, while the small models that edge platforms can afford can only recognize limited-domain information. Currently, large foundation models represented by CLIP and GPT are widely recognized for their superior generalization ability. Enabling small models on edge platforms to achieve efficient and real-time IoT perception applications through an edge-cloud collaboration framework with foundation models is an important research direction.
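
To make the "generalization via large foundation models" point concrete, here is a short sketch of zero-shot image classification with CLIP through the Hugging Face transformers API. The model name, image path, and candidate labels are only examples; an edge deployment would typically use a distilled or quantized variant.

from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("frame_from_edge_camera.jpg")  # placeholder image path
labels = ["a robot arm", "a pedestrian", "an empty corridor"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity per label
print(dict(zip(labels, probs[0].tolist())))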

Error reported when running the reid_job example; has anyone encountered this?

Traceback (most recent call last):
File "/root/miniconda3/envs/py38/bin/ianvs", line 33, in
sys.exit(load_entry_point('ianvs==0.1.0', 'console_scripts', 'ianvs')())
File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/ianvs-0.1.0-py3.8.egg/core/cmd/benchmarking.py", line 37, in main
File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/ianvs-0.1.0-py3.8.egg/core/cmd/obj/benchmarkingjob.py", line 93, in run
File "/root/miniconda3/envs/py38/lib/python3.8/site-packages/ianvs-0.1.0-py3.8.egg/core/testcasecontroller/testcasecontroller.py", line 58, in run_testcases
RuntimeError: testcase(id=eac637fa-c124-11ee-b0ba-694f39bcabcf) runs failed, error: (paradigm=multiedgeinference) pipeline runs failed, error: 'MultiedgeInference' object has no attribute 'modules_funcs'

Domain-Specific Large Model Benchmarking Based on KubeEdge-Ianvs

What would you like to be added/modified:
Based on existing datasets, the issue aims to build a benchmark for domain-specific large models on KubeEdge-Ianvs. Namely, it aims to help all Edge AI application developers validate and select the best-matched domain-specific large models. This issue includes:

  1. Benchmark Dataset Map: A mapping document, e.g., a table, includes test datasets and their download method for various specific domains.
  2. Large Model Interfaces: Integrates open-source benchmarking projects like OpenCompass. Provides model API addresses and keys for online large model invocation.
  3. Domain-specific Large Model Benchmark: Focuses on NLP or multimodal tasks. Constructs a suite for the government sector, including test datasets, evaluation metrics, testing environments, and usage guidelines.
  4. (Advanced) Industrial/Medical Large Model Benchmark: Includes metrics and examples.
  5. (Advanced) Efficient Evaluation: Enables concurrent execution of tasks with automatic request and result collection.
  6. (Advanced) Task Execution and Monitoring: Visualizes the large model invocation process.

Why is this needed:
As large models enter the era of scaled applications, the cloud has already provided infrastructure and services for these large models. Relevant customers have further proposed targeted application requirements on the edge side, including personalization, data compliance, and real-time capabilities, making AI services with cloud-edge collaboration a major trend. However, there are currently two major challenges in terms of product definition, service quality, service qualifications, and industry influence: general competitiveness and customer trust problems. The crux of the matter is that the current large model benchmarking focuses on assessing general basic capabilities and fails to drive large model applications from an industry or domain-specific perspective.

This issue reflects the real value of large models through industry applications from the perspectives of the domain-specific large model and cloud-edge collaborative AI, using industry benchmarks to drive the incubation of large model applications. Based on the collaborative AI benchmark test suite KubeEdge-Ianvs, this issue supplements the large model testing tool interface, provides matching test datasets, and constructs large model test suites for specific domains, e.g., for governments.

Recommended Skills:
KubeEdge-Ianvs, Python, LLMs

Useful links:
Introduction to Ianvs
Quick Start
How to test algorithms with Ianvs
Testing incremental learning in industrial defect detection
Benchmarking for embodied AI
KubeEdge-Ianvs
Example LLMs Benchmark List
Ianvs v0.1 documentation
(**) National standard plan "Artificial Intelligence - Pretrained Models Part 2: Evaluation Metrics and Methods" and standardization documents for government, industrial, and other large models

Personalized LLM Agent based on KubeEdge-Ianvs cloud-edge collaborative lifelong learning

What would you like to be added/modified:
Research benchmarks for evaluating LLMs and LLM Agents.
Develop a personalized LLM Agent using lifelong learning on the KubeEdge-Ianvs edge-cloud collaborative platform.

Why is this needed:
Large Language Models (LLMs) have garnered widespread attention due to their exceptional reasoning abilities and zero-shot capabilities. Among these, the LLM Agent is viewed as a significant practical application of LLMs in the physical world. An LLM Agent can accomplish various complex tasks in the physical world through task planning, tool usage, self-reflection, and task execution. This project aims to develop a personalized LLM Agent by utilizing a cloud-edge collaborative framework, combining responses from large cloud-based models with those generated from privacy-sensitive data on edge devices. We plan to develop a personalized LLM Agent based on the KubeEdge-Ianvs cloud-edge collaborative platform for lifelong learning. This system will be capable of integrating the generalization capabilities of large cloud-based LLMs with personalized user data on edge devices to generate high-quality and personalized responses.

Recommended Skills:
LLMs, Python, KubeEdge-Ianvs

Useful links:
Introduction to Ianvs
Install of Ianvs and Introduction to Lifelong Learning
HuggingGPT: Solving AI tasks with chatgpt and its friends in hugging face, NeurIPS '24
TaskBench: Benchmarking Large Language Models for Task Automation

[ADVICE]Add a simple QuickStart Example

What should be added/modified:

The current QuickStart examples often require a variety of AI-related environments, but these environments may not be necessary during actual use. Moreover, the process of installing and configuring these environments is quite cumbersome. I believe the project should have a simpler example to get started quickly, just like the MNIST handwritten digit recognition task for CNNs.
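
In the spirit of this suggestion, here is a self-contained sketch of roughly the scale such a quick start could target: digit recognition with scikit-learn's bundled digits dataset, with no GPU or heavyweight AI stack required. This is illustrative only and not an Ianvs example.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 digit images, no download step needed
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))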
