Name: Runyi Yang
Type: User
Company: Imperial College London; AIR Tsinghua University
Bio: I am an MRes student at Imperial College London, advised by Dr. Tolga Birdal, focusing on 3D vision.
Location: London, United Kingdom
Blog: https://runyiyang.github.io/
Runyi Yang's Projects
This is an updated version of the Bingham library, built with torch 2.0.0, CUDA 11.7, gcc-9, and g++-9.
Implicit field representation of scenes.
A Python API for large-scale graph-based point cloud downsampling.
Implements handwritten digit recognition on the MNIST dataset using the simplest possible neural network.
This is the repo for my Machine Learning course; the main coursework is EEG data processing.
A collaboration-friendly studio for NeRFs.
📖 Paper reading notes in computer vision and machine learning, especially 3D SLAM, implicit representations, and semantic segmentation. (Constantly updating!) Everyone is welcome to share ideas and comments; I would greatly appreciate it if you could point out my mistakes or answer my questions.
Runyi Yang's Python Learning code at Imperial.
Runyi Yang's personal homepage.
Image style transfer produces an output image that matches a given target style image in style and a given target content image in content. It comes in two flavors: non-real-time and real-time. Non-real-time style transfer stylizes only the one given content image; it is relatively simple to implement, but requires a separate optimization for every input content image. Real-time style transfer instead trains a model that can stylize any content image, which is more complex to implement.

Style transfer typically uses a VGG network (e.g., VGG19) to extract image features, and defines the style (content) loss as the discrepancy between the features of the stylized image and those of the target style (content) image. In non-real-time style transfer, the style and content losses are computed and back-propagated to obtain gradients with respect to the stylized image itself, which is then updated directly. Over many iterations, the losses between the stylized image and the target style (content) image steadily decrease, yielding the final stylized image. The optimization is usually initialized with the target content image plus noise.
The implementation of SUNDAE: Spectrally Pruned Gaussian Fields with Neural Compensation
VGG19 comes from "Very Deep Convolutional Networks for Large-Scale Image Recognition": "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers." This repo uses the network with pretrained parameters to classify images on ImageNet.
With the growing diversity of university majors and courses, and the spread of smart information networks (mobile and desktop), students' needs and methods for course selection (e.g., lottery-based selection, public electives, double-degree and cross-major enrollment) have also diversified. Existing course-selection systems still have shortcomings: unstable operation, confusing selection logic, unfriendly interfaces, and an under-optimized teacher-side design. Building on an existing web-based course-selection system and modern web technology, this project aims to deliver a logically complete, easy-to-use, and stably running smart course-selection system.