
Comments (16)

dbalsom commented on May 24, 2024

Thanks for the kind words. I've been pretty busy refactoring the hell out of MartyPC's video system, and I did come up with a basic 'CRT shader' as just a sort of example:

mono_shader

No halation, phosphor persistence, bezel reflections, or anything fancy like that yet. It's basically just a demo to show that, yes, I can run fragment shaders in the scaler pipeline now.

More on topic for the original issue: we now do aspect correction in the shader, so that's one less thing slower systems have to deal with.

from martypc.

dbalsom commented on May 24, 2024

Based on the performance view, your emulation time is okay, but too much time is being spent in framebuffer processing. I am using a crate to resize the image for aspect correction that attempts to use vector instructions on supported CPUs; I wonder if we are hitting a case where this is not performant on the Amlogic.

Please try this for me to verify: go to Options > Display and uncheck 'Correct aspect ratio'; let's see if the performance improves.


phil2sat commented on May 24, 2024

Screenshot from 2023-09-17 09-17-08

Yes, this tripled the FPS (9-11) as you can see in the picture, but it's still using only 12% of CPU resources.
I don't know what good values for Framebuffer Time are?


dbalsom commented on May 24, 2024

100/8 is 12.5%, so that indicates we are maxing out a single core. Since MartyPC is mostly single-threaded, this means it is running as fast as it currently can.
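
As a quick sanity check on that arithmetic:

```rust
fn main() {
    // One fully saturated core on an 8-core SoC reads as 100 / 8 = 12.5%
    // in a whole-system CPU meter, matching the ~12% reported above.
    let cores = 8.0_f64;
    let one_core_pct = 100.0 / cores;
    println!("one saturated core = {one_core_pct}% of total CPU");
    assert_eq!(one_core_pct, 12.5);
}
```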

While it is true that MartyPC could potentially utilize threads more effectively, it is hard to split the workload of a tightly integrated, clock-synchronized system like the PC across multiple threads. It will take some research. One thing I think I can do is split the GUI into its own thread to minimize the impact of drawing debugger windows on performance, but I don't think that will help you here.

There are two things going on that I can see from the performance view.

The time spent drawing a frame is the sum of Emulation Time, FrameBuffer Time, and Gui Render Time. To reach the CGA's render target of 60 FPS we must complete all three in under 16.6 ms. MartyPC will start throttling once you exceed 15 ms; this is to keep the application GUI somewhat responsive when your computer can't keep up. Otherwise you start getting those 'application not responding' warnings from the OS. Now framebuffer processing is no longer the bottleneck, but your emulation time is now 14 ms, which is quite high. This value is more like 3 ms on my system.
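
The budget arithmetic described above can be sketched as follows; `should_throttle` and the exact thresholds are illustrative, not MartyPC's actual code:

```rust
use std::time::Duration;

// A 60 Hz CGA field rate gives a per-frame budget of 1000 / 60 ≈ 16.6 ms.
const FRAME_BUDGET: Duration = Duration::from_micros(16_667);
// Throttling kicks in at 15 ms to leave headroom for the host GUI.
const THROTTLE_AT: Duration = Duration::from_millis(15);

/// Hypothetical helper: frame time is the sum of emulation, framebuffer,
/// and GUI render time, and throttling starts once it exceeds 15 ms.
fn should_throttle(emu: Duration, fb: Duration, gui: Duration) -> bool {
    emu + fb + gui > THROTTLE_AT
}

fn main() {
    // 14 ms of emulation time alone leaves almost nothing of the budget.
    let total = Duration::from_millis(14) + Duration::from_millis(2) + Duration::from_millis(1);
    println!("over budget: {}", total > FRAME_BUDGET);
    assert!(should_throttle(
        Duration::from_millis(14),
        Duration::from_millis(2),
        Duration::from_millis(1)
    ));
}
```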

Also, the core clock of this system is pretty low at 1.5 GHz, which is not helping.

The second issue is UPS: the number of window updates sent to MartyPC per second by the host window manager. This value is typically at or above the system refresh rate; a value of 9 is suspiciously low. MartyPC can only draw a frame to the screen during a GUI update, and if we only get 9 updates per second, well, we can only get 9 FPS. I usually see low UPS when running under virtualization, over RDP, or over remote X11, so make sure you are not doing that.
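
The UPS cap reduces to simple arithmetic: whatever the emulator produces, presented frames are bounded by host window updates. `effective_fps` is a hypothetical helper, not a MartyPC function:

```rust
/// A frame can only be presented during a host window update, so
/// displayed FPS is capped by updates per second (UPS).
fn effective_fps(ups: u32, target_fps: u32) -> u32 {
    ups.min(target_fps)
}

fn main() {
    assert_eq!(effective_fps(9, 60), 9);    // the suspiciously low case above
    assert_eq!(effective_fps(144, 60), 60); // a healthy desktop
    println!("9 UPS caps us at {} FPS", effective_fps(9, 60));
}
```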

If you are willing to use cargo-flamegraph to profile MartyPC on this system I can take a look at the hotspots, but I am afraid that for the moment I will have to say this hardware is unsupported. I am planning a lighter front-end using SDL in the future that might be more performant on systems like this. More general optimization is needed. It would be my goal to run on a Raspberry Pi 4, which I think is achievable.

Just out of curiosity, what is the exact hardware being used here? I googled the CPU and I see it is used in AndroidTV boxes and such.


phil2sat commented on May 24, 2024

Since the Amlogic is quite similar to the Pi 4, this could be a first approach to getting it running.

I can run a full Amiga 1200 with a 68040, MMU, and JIT at 163x SysInfo speed; even the holy grail "Agony" plays without sound stuttering.

I can even play Minecraft on it, kind of; with some mods it plays at around 20 FPS, but the GPU is the limit here. Some other S905X boxes achieve 60-70 FPS with vendor blobs.

And I use Amiberry for Amiga emulation, the Pi Amiga emulator.

This is a "Mecool KIII-Pro"; I switched to Fire TV sticks, so I no longer had any use for it. The hardware is quite fast even though it's around 5 years old.
image

Linux 6.1.39-1-MANJARO-ARM-AML with the complete GNOME desktop.
Eight-core S912, Mali T-820 GPU, 2.7 GB usable RAM, 4x USB 2.0, Gigabit Ethernet, dual-band 802.11 b/g/n/ac Wi-Fi, and Bluetooth 4.0
model name : ARMv8 Processor rev 4 (v8l)
BogoMIPS : 48.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid

I tried a flamegraph, but I'm unsure if I did it right:

cargo flamegraph --bin=martypc -- --configfile /home/phil2sat/ibm-xt/martypc.toml

[ perf record: Woken up 3591 times to write data ]
Warning:
Processed 250572 events and lost 8358 chunks!

Check IO/CPU overload!

Warning:
Processed 318590 samples and lost 56,87%!

[ perf record: Captured and wrote 2205,052 MB perf.data (137399 samples) ]

flamegraph

UPS-wise it's running at 59.9 Hz, so I don't understand that value.

Screenshot from 2023-09-18 12-56-17


dbalsom commented on May 24, 2024

You don't need to convince me that the hardware is capable of emulating an IBM PC; I am quite sure that it is. The question is whether the hardware can run MartyPC in its current state, and the answer currently appears to be no. There are i7 systems (admittedly quite old) that struggle with it.

AmiBerry uses a JIT to recompile instructions into native code; MartyPC is a pure, uncached, basically unoptimized interpreter. Interpreters will always be much slower than JITs, but I don't know how to write a JIT.
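
To make the interpreter-vs-JIT gap concrete: a pure interpreter pays a fetch-and-dispatch cost on every emulated instruction, overhead a JIT removes by compiling hot blocks to native code once. A toy dispatch loop (not MartyPC's actual core; `Op` is invented for illustration) looks like:

```rust
// Minimal interpreter sketch: the per-instruction dispatch cost is the
// point, not the (trivial) instruction set.
#[derive(Clone, Copy)]
enum Op {
    Inc,
    Dec,
    Halt,
}

fn run(program: &[Op]) -> i64 {
    let mut acc = 0;
    let mut pc = 0;
    loop {
        // Fetch + decode + dispatch happen for every single instruction.
        match program[pc] {
            Op::Inc => acc += 1,
            Op::Dec => acc -= 1,
            Op::Halt => return acc,
        }
        pc += 1;
    }
}

fn main() {
    let program = [Op::Inc, Op::Inc, Op::Dec, Op::Halt];
    assert_eq!(run(&program), 1);
}
```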

Regarding comparisons to other emulators, let's put things into perspective. Dosbox is 21 years old. AmiBerry is based on WinUAE which is based on UAE. UAE's first release was in 1995, meaning that AmiBerry benefits from nearly 28 years of Amiga emulation experience.

Dosbox-X, just a single fork of Dosbox, has 110 contributors on GitHub. WinUAE 5.0 lists 25 people as contributors, and the AmiBerry repo on GitHub has 34 contributors.

MartyPC has one author, is a year and a half old, and is the author's first emulator. Temper expectations accordingly.

Right now I have been more focused on being 'correct' than being 'fast'. If you browse through my issues there are still things that don't run, so that mission isn't complete.

If we consider the poor performance of the FIR crate, though, it might be worth opening an issue over there. The repo for FIR is https://github.com/cykooz/fast_image_resize, and there are benchmarks in that crate you could compile, run, and compare to their published ARM results; you may have uncovered a bug. It should not take 15 ms to resize a CGA-sized bitmap, especially when FIR is supposed to be very well optimized.
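
As a rough, stdlib-only sanity check (plain nearest-neighbour scaling, not fast_image_resize's SIMD convolution path, and `resize_nn` is a hypothetical helper for this estimate only), even a naive resize of a CGA-sized RGBA frame touches so little memory that 15 ms would be surprising on any modern core:

```rust
use std::time::Instant;

// Naive nearest-neighbour resize of a packed RGBA (u32-per-pixel) buffer.
fn resize_nn(src: &[u32], sw: usize, sh: usize, dw: usize, dh: usize) -> Vec<u32> {
    let mut dst = vec![0u32; dw * dh];
    for y in 0..dh {
        let sy = y * sh / dh;
        for x in 0..dw {
            dst[y * dw + x] = src[sy * sw + x * sw / dw];
        }
    }
    dst
}

fn main() {
    // A 640x200 source scaled to 640x480 reads ~0.5 MB and writes ~1.2 MB.
    let src = vec![0xFFAA_5500u32; 640 * 200];
    let t = Instant::now();
    let dst = resize_nn(&src, 640, 200, 640, 480);
    println!("resized {} px in {:?}", dst.len(), t.elapsed());
    assert_eq!(dst.len(), 640 * 480);
}
```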

Let's take a peek at the flamegraph.
image

MartyPC's core is on the left; the graphics libraries I am using are in blue and green. They represent 2/3 of execution time, which is outside of my control. The main culprit, I think, is wgpu; it is primarily targeted at Vulkan and DX, and based on the performance view you are using the GL backend, which I don't know how much love or attention receives. I do know wgpu performs poorly on Intel GPUs, and it does not surprise me that it would perform poorly on a Mali chipset as well. This is where a software-based frontend like SDL would benefit the most.

You can see that the biggest function is 'memcpy' within meson; while I am no graphics pipeline expert, this implies to me that there is some performance issue with copying the framebuffer to graphics memory or similar.

One thing that would be interesting is for you to try the wasm build of MartyPC.

https://dbalsom.github.io/martypc/web/player.html?title=area5150
https://dbalsom.github.io/martypc/web/player.html?title=8088mph

do they run significantly better, worse, or the same?


phil2sat commented on May 24, 2024

Just for your info, my i7-4790k with R290x using Vulkan, from 2014, runs MartyPC at a full 60 FPS. (Just tested.)

I know MartyPC is new, and it's more than cool that it even runs on an Android set-top box from 2018 hacked to run Linux.

Since the Panfrost Vulkan stack currently doesn't support the T-820 GPU, I could give swrast a try (the other cores could do something useful).
Also, I will compile the benchmarks; maybe somewhere in your build the NEON-for-ARM switch got lost, since I saw that SSE4.1, AVX2, and NEON are supported.

Next, I upgraded the box's system to check whether there is any performance progress; Panfrost for Mali GPUs is also relatively new (~1-2 years).
EDIT: glmark jumps by +100 FPS after the update, from 32 to 132, which is nice, but MartyPC stays at the same values. I will investigate further.

As for myself, I'm no developer, just a dumb user, but if there is something I can help test, I'm in...


phil2sat commented on May 24, 2024

For the resize crate, I guess I found the issue:

Make a CPU-flag check for Avx2, Sse4_1, or Neon (if aarch64) in
frontend_libs/render/src/resize.rs:62:59

use fast_image_resize as fr;

fn main() {
    let mut resizer = fr::Resizer::new(
        fr::ResizeAlg::Convolution(fr::FilterType::Lanczos3),
    );
    // Only enable an extension the CPU actually reports; the setter is
    // unsafe because selecting an unsupported extension can crash.
    #[cfg(target_arch = "x86_64")]
    if std::arch::is_x86_feature_detected!("sse4.1") {
        unsafe {
            resizer.set_cpu_extensions(fr::CpuExtensions::Sse4_1);
        }
    }
}

It's completely unused in your source, and everybody with a compatible CPU could gain some performance from using it.
The new resizer.rs also has a CPU-flag check integrated:

https://raw.githubusercontent.com/Cykooz/fast_image_resize/24edd65eef20596e51c23f84db79474a900e2d18/src/resizer.rs

But after compiling it, nothing got better :(


dbalsom commented on May 24, 2024

I recall not turning that on because the base implementation alone was about 5x faster than my original routine, which I was pleased with, and the conditional compilation was a bit clunky; but it's probably something I should do at some point. I'm still curious as to why it is so slow.


phire commented on May 24, 2024

Well, the major problem is that memcopies are just very slow on that SoC, especially copies going to the GPU. This is partly because it only has slow Cortex-A53 cores (running at 1.5 GHz) and partly because it only has 3.3 GB/s of memory bandwidth (according to the benchmark I found).

I suspect there might be a bug in wgpu that's making things worse, but maybe that SoC really is just that slow at memory copies.

Taking a look at the rendering code, you are copying way too much data to the GPU. The framebuffer that gets sent to the GPU is 32-bit color, line-doubled with an optional 2x scaling, all done on the CPU. That's a 1280x800x32-bit buffer, or 4 MB. To write that 4 MB buffer out, memcpy it twice, and then have the GPU read it back in at 60 fps would require 1.4 GB/s of memory bandwidth, almost half of the memory bandwidth gone. (And then the GPU writes it out again at least once more, twice if you are using a compositing window manager.)
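
The bandwidth estimate can be reproduced in a few lines. The 'touches' count is my reading of the scenario described: one CPU write of the buffer, two memcpys at one read plus one write each, and one GPU read:

```rust
fn main() {
    let frame_bytes = 1280u64 * 800 * 4;    // ~4 MB RGBA framebuffer
    let touches: u64 = 1 + 2 * 2 + 1;       // write + 2 memcpys + GPU read
    let bytes_per_sec = frame_bytes * touches * 60; // at 60 fps
    let gb_per_sec = bytes_per_sec as f64 / 1e9;
    // ≈ 1.47 GB/s, roughly half of the SoC's ~3.3 GB/s total.
    println!("{gb_per_sec:.2} GB/s");
    assert!(gb_per_sec > 1.4 && gb_per_sec < 1.5);
}
```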

Ideally, you would send just the raw video memory contents (usually 16 KB or less of data) every frame and do everything with a single-pass pixel shader, but at the very least you should aim to send only a 640x200x32-bit (512 KB) framebuffer and do the line doubling, 2x scaling, and any aspect-ratio correction in a single-pass pixel shader. It looks like the pixels library already does 2x scaling; it just needs support for custom pixel aspect ratios. I could also see an argument for adding custom color palette support to pixels.

The other problem is that the two memcopies are stupid. Pixels really should be using mapped buffers rather than textures, so your rendering code can write directly into GPU memory.


dbalsom commented on May 24, 2024

> Well, the major problem is that memcopies are just very slow on that SoC, especially copies going to the GPU. [...] Taking a look at the rendering code, you are copying way too much data to the GPU. [...] Pixels really should be using mapped buffers rather than textures, and so your rendering code can write directly into GPU memory.

Thanks for the analysis. Your point about rendering the screen in a shader from the raw RGBI output of the emulated CGA is a great idea. Once I'm done with my current efforts at squeezing the last bits of cycle-accuracy out of the 5150 sniffer, I am going to tackle a rework of rendering. Whether Pixels stays in the picture, I am not sure; 0.2.0 will probably still use it, simply because it would be a lot of work to replace and the next update has been delayed for far too long as it is.


mdrejhon commented on May 24, 2024

> MartyPC has one author and is a year and a half old and is the author's first emulator. Temper expectations accordingly.

Understatement of the century.

In a mere 1.5 years, you singlehandedly:

  • Created the reference IBM PC 5150 emulator, cycle-exact
  • Created the reference IBM CGA emulator, cycle-exact and compatible with recently discovered CGA secrets
  • Wrote the only emulator that runs 8088MPH and Area5150 perfectly
  • Were unanimously crowned by the demoscene as the reference emulator of the platform
  • All multiplatform, already cross-compiled one click away in JavaScript
  • All while this is the first emulator program you've written
  • All while this is the first-ever Rust application you've written
  • All as a sole developer, with a codebase written nigh completely from scratch
    (even if helped by other emulator knowledge and general-purpose Rust libraries)

Neither DosBox nor WinUAE was remotely this good and flexible 1.5 years after its creation. Not by an intergalactic margin. Not even the many DosBox forks 1.5 years after they were forked, nor the many WinUAE forks.

Oh boy.

Your emulator deserves a standing ovation.

This is what inspired me to learn Rust programming recently (for other non-emulator projects).


mdrejhon commented on May 24, 2024

> Thanks for the analysis. Your point about rendering the screen in a shader from the raw RGBI output of the emulated CGA is a great idea. [...]

For TestUFO I have been working to create a CRT filter for some teaching tools (some TestUFO patterns are teaching tools). Eventually I'm going to have a CRT electron beam simulator filter (using brute Hz and rolling BFI with phosphor fade-behind) in JavaScript. Long term, we'll have 1000Hz OLEDs using 16 digital refresh cycles to generate one analog 60Hz refresh cycle in an electron beam simulator shader, so it's kind of a motion-blur-reduction holy grail for me.

I already helped the creator of the RetroTink 4K video processor do box-in-middle BFI for retro users, with superlative results, so you can have box-in-middle CRT electron beam simulators in a video processor box (a progression from monolithic BFI to advanced rolling BFI).

So CRT tube simulation is a big topic.

Most people focus on spatial simulation (CRT texture), but my focus is temporal simulation (CRT electron beam + phosphor + low motion blur), which actually works (Digital Foundry confirms on Twitter).

The holy grail for me (Blur Busters) is true spatial AND temporal simulation of a CRT, and that's why I am absurdly excited about the upcoming 360Hz and 480Hz OLEDs, which make it practical to migrate from monolithic BFI to true beam simulators.

I am new to shaders, but ShaderToy plus an AI tutor (use gpt-4-1106-vision or better, not the ChatGPT junk) has been a big help in teaching me how to write shaders without taking over the writing for me. I'm sure you don't need an AI programming tutor if you learned Rust in the pre-AI-tutor days. You learned Rust from scratch; shader programming is pretty easy for monolithic 2D operations like a CRT filter.


mdrejhon commented on May 24, 2024

That's OK!

Temporal CRT simulation isn't really needed for IBM PC/XT stuff, because most of that software isn't a fast scroller. The CRT's motion-blur-reduction quality works best with fast scrollers like Super Mario or Sonic the Hedgehog: anything that creates motion blur on LCDs but not on CRTs.

But anyway, if any emulator author ever needs temporal-shader-sim help, my tip is to get a 240Hz+ OLED and then reach out to me for advice on how to implement the algorithm.

CRT Electron Beam Simulator Formulas are Simpler Than Expected In Some Ways / Harder Than Expected In Other Ways

CRT electron beam simulators are actually distillable to a single math formula per scanline, something that shaders are capable of doing. It can even be converted to a lookup table, for even lower-end GPUs or pure software (aka JavaScript with no GPU). The concept is a modified CRT shader formula that accepts input of various variables:

  • "Phosphor decay" (programmable)
  • The known gamma-correction curve (usually 2.2) of the display (a bit more complicated for HDR)
  • A "time offset into a CRT refresh cycle", or "current CRT electron beam scanline number corresponding to this specific destination refresh cycle", which helps the formula compensate for the raster offset relative to the phosphor fade, since you're displaying the same emulator frame multiple times at different refresh offsets of the simulated tube. (Bonus: this is beamraceable, for sub-refresh latency, e.g. 1 ms of lag on a 1000Hz OLED relative to the original CRT tube.)

You do the computation only once per scanline: the output is a different brightness per scanline number, applied to that pixel row.

(Note: this has to be gamma-corrected, since RGB(64,64,64) is not half the photons of RGB(128,128,128), so a gamma-curve formula is needed in the shader for this sort of thing too.)
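
That gamma point can be checked numerically, assuming a simple power-law 2.2 curve:

```rust
fn main() {
    // Convert gamma-encoded values to linear light: light ∝ (v/255)^2.2.
    let linear = |v: f64| (v / 255.0).powf(2.2);
    let ratio = linear(64.0) / linear(128.0);
    // RGB(64,64,64) emits only ~22% of the photons of RGB(128,128,128),
    // not 50%, so halving brightness must happen in linear space.
    println!("64 is {:.1}% as bright as 128", ratio * 100.0);
    assert!((ratio - 0.5f64.powf(2.2)).abs() < 1e-9);
}
```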

So for a 60Hz CRT software-based beam simulator outputting to a 960Hz OLED (2027), you'd have 16 different CRT beam position values applied to the same unchanged emulator framebuffer (or post-CRT-filtered framebuffer). The formula produces the "rolling gradient band" output, complete with phosphor fade. It looks like a single frame of a high-speed video of a CRT tube.
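
A minimal sketch of that per-scanline computation, with hypothetical names (`scanline_brightness`, and a simple exponential fade standing in for a programmable phosphor-decay curve; gamma correction omitted):

```rust
/// Brightness of one scanline for one sub-refresh: the line under the
/// simulated beam is at full brightness, and lines painted earlier in
/// the refresh cycle have decayed, wrapping around the cycle.
fn scanline_brightness(scanline: usize, beam_line: usize, total_lines: usize, decay: f64) -> f64 {
    let age = (beam_line + total_lines - scanline) % total_lines;
    decay.powi(age as i32)
}

fn main() {
    let (lines, decay) = (200, 0.95);
    // Full brightness at the beam position...
    assert_eq!(scanline_brightness(120, 120, lines, decay), 1.0);
    // ...and phosphor fade 20 lines behind it.
    let faded = scanline_brightness(100, 120, lines, decay);
    println!("20 lines behind the beam: {faded:.3}");
    assert!(faded < 0.4);
}
```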

In theory it's simply an additional layer on top of an existing CRT filter. I don't bother with horizontals, just a per-scanline brightness computation depending on the current CRT raster position you give the shader, and it outputs a whole frame (like a 1/1000 s photograph of a CRT tube, time-offset relative to VBI). Simpler than I expected! Later optimization can make it output just a frameslice, so you can beamrace it (e.g. your 480Hz OLED is ACTUALLY displaying the top edge of your emulator framebuffer even before the emulator has finished the bottom of the emulated screen!). In this case you don't need VSYNC OFF beam racing; it's just full output framebuffers for destination refresh cycles, which simplifies CRT beam simulator beam racing into a cross-platform endeavour.

But anyway, this is only needed for fast-motion emulators that need large amounts of motion blur reduction without the bad flicker of monolithic BFI or monolithic strobe backlights. The rolling scan has to be shingled on both edges to compensate for phosphor fade and to prevent tearing artifacts between adjacent display refresh cycles. Gamma-corrected balancing of a pixel's average luminance between adjacent refresh cycles is pretty hard, and shingled/overlapped rolling-scan BFI has some problems because you have a finite number of digital refresh cycles trying to emulate an analog refresh cycle. Every pixel needs to emit the same number of photons per refresh cycle, despite all the overlapped gradient bands of what looks like a single frame of a high-speed video of a CRT.

(Incidentally, a 1000fps high speed video of a CRT, played back to a laboratory 1000Hz display, looks kinda like the original CRT tube. Same concept)

Brute Hz has an absolutely magical way of bringing back the temporal feel of a CRT tube, and I think this will be an emulator algorithm of the 2030s. I'm frankly hoping I'll be able to help a vendor implement it in some future box-in-the-middle video processor. I'd convert SDR to HDR and use the HDR nits headroom (and the small, tight window size of rolling BFI) to re-brighten the very dark BFI, since a CRT beam spot is super-bright.

I should move this discussion to a separate GitHub issue, since this is knowledge I'd like to teach more emulator authors (who don't currently realize that more Hz can enable faithful 60Hz CRT simulation without requiring native display-side BFI).

TestUFO CRT Simulator Demo for 240Hz+ OLEDs coming in 2024

I already have a prototype beam simulator, written offline, that works only barely properly on 240Hz OLEDs and much better on upcoming 360Hz+ OLEDs. I'm going to publicize it eventually, probably in 2024, as a TestUFO CRT simulator test. It does not use a shader, as I converted it to per-scanline brightness lookup tables that are precomputed whenever a gamma setting, a phosphor-fade setting, or the current ratio (crtHz:realHz) changes. At a 4:1 ratio it looks barely better than normal BFI; at 6:1+ it starts to look much better than monolithic BFI; and at bigger ratios (8:1 and 16:1) it starts looking CRT-realistic. I'll be ready for 480Hz OLEDs in 2025 and 1000Hz OLEDs in 2027.

Some HDR bugs in some displays can produce weird horizontal banding artifacts (like smartphone-filming a CRT tube), caused by unbalanced gamma compensation of the rolling-scan shingling/overlapping between adjacent refresh cycles. So a compensation slider setting helps a bit (odd gamma curves along the vertical dimension).

I started this work 2 years ago. Originally I thought it required powerful computation, but it's just a per-scanline brightness lookup table, something so low-compute that even pure JavaScript can do it in realtime in today's browsers (I got my CRT simulator running at 240 fps at 240Hz in most GPU-accelerated browsers).

I may opensource the algorithms to help others add temporal CRT simulator formulas to their app, though.

Better Discussion Venue?

I did bring it up with VileR on his int10h blog, and we had a back and forth:
https://int10h.org/blog/2021/01/simulating-crt-monitors-ffmpeg-pt-1-color/
He agreed my idea was practical, and I have since refined my work further.

He's also the one who successfully got my JavaScript beam-racing demo working in a web browser at https://www.testufo.com/raster51 -- true real-raster beam racing works only on Windows, using a custom command-line option, but it works on both AMD and NVIDIA! My CRT beam simulator requires none of this complexity, because I am just using brute Hz instead of in=out Hz. So software-based CRT beam simulators (combined with software-based beam racing of full CRT-simulation frames) end up simpler than hardware-based beam racing.

Is there a better GitHub tracking item for me to copy and paste this discussion to? (Even somebody else's emulator; I don't think MartyPC really needs temporal CRT simulation, but you're one of the few people who can probably understand this type of concept.) Vogons is good, but I think I want to use some emulator's GitHub issue tracker, since those are easy to share between multiple emulator authors just by cross-referencing.


dbalsom commented on May 24, 2024

> Is there a better GitHub tracking item for me to copy and paste this discussion to? [...]

I appreciate your passion for this topic, and I do find it fascinating, even if I'm not in a position to implement any of it yet. That said, I am not sure an issue topic is the appropriate venue. I would prefer issues be reserved for bug reports, since I get notifications when they are updated; suggested features or technical discussions are more appropriate for a discussion topic.

I might suggest you create your own GitHub repo, enable Discussions, and link folks there. Since it could be focused specifically, you could have multiple threads: notes on the general technology, experiments you may be doing, or discussions about implementations for specific emulators. You could also use GitHub's basic wiki feature to collect your documentation, as I have.


mdrejhon commented on May 24, 2024

That's a fantastic idea.

I will create my own "research repo". That is a solution that doesn't hijack anybody's issue tracker.

What I really like is how GitHub pings both ways when I refer to somebody else's repo: it puts a link there. That is why I want to create a discussion venue on GitHub instead of some undiscovered forum thread. The automatic cross-repo linking is wonderful, and it's why I prefer your suggestion to an isolated forum silo.

That way, I can reference related entries in multiple emulator projects (e.g. monolithic BFI or CRT filters), and a link automatically pops up over there as another GitHub project that mentioned their issue.

So, sometime in early 2024 I will probably create an incubation GitHub repo for CRT electron beam simulator research, even if (initially) it's only its own issue tracker serving as a discussion venue. I'll edit, remove, and replace the above posts with a link once I do this, but it might have to wait until after the holiday crunch.

