floooh / chips
8-bit chip and system emulators in standalone C headers
License: zlib License
Hello,
I built everything successfully on the command line. A few questions:
Thanks,
According to vic-ii.txt, the first c-access in badlines happens in the second half of cycle 15, and the following cycles have g/i-accesses first, then c-accesses. In m6569.h, there is no c-access in cycle 15, and the following cycles have c-accesses first, then g/i-accesses. Is this intentional? It also seems odd, since g/i-accesses should always happen in the first half of the cycle (phi low), as that's the default VIC-II half of the cycle, and for cycles 1..10,58..63 this is what m6569.h does as well. @floooh
Pauses in the animated intro screen:
https://floooh.github.io/tiny8bit/cpc.html?file=cpc/chase_hq.sna&joystick=true
Music/gameplay slowing down on 3rd screen:
https://floooh.github.io/tiny8bit/cpc.html?file=cpc/cybernoid.sna&joystick=true
Must have been caused by one of these commits?
Not sure what could be the cause, but saving the state and loading it back is causing a glitch in Tiny Dungeons. This is before saving:
When the state is loaded back, the player character goes away:
I can move it around and play normally, but it's invisible now. My load and save routines are pretty straightforward: evaluate the pointer offsets, and dump the zx_t structure after the magic number when saving; do the opposite when loading. It seems to work just fine except for this glitch.
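A minimal sketch of the save/load approach described above (the magic number, the zx_t stand-in, and the file layout are assumptions for illustration, not the actual emulator code):

```c
#include <stdint.h>
#include <stdio.h>

#define SAVE_MAGIC 0x315A5853u             /* hypothetical magic number */

typedef struct { uint8_t mem[64]; } zx_t;  /* stand-in for the real zx_t */

/* write the magic number, then dump the raw structure */
static int save_state(const char* path, const zx_t* sys) {
    FILE* f = fopen(path, "wb");
    if (!f) return 0;
    uint32_t magic = SAVE_MAGIC;
    int ok = (fwrite(&magic, sizeof(magic), 1, f) == 1)
          && (fwrite(sys, sizeof(zx_t), 1, f) == 1);
    fclose(f);
    return ok;
}

/* check the magic number, then read the raw structure back */
static int load_state(const char* path, zx_t* sys) {
    FILE* f = fopen(path, "rb");
    if (!f) return 0;
    uint32_t magic = 0;
    int ok = (fread(&magic, sizeof(magic), 1, f) == 1)
          && (magic == SAVE_MAGIC)
          && (fread(sys, sizeof(zx_t), 1, f) == 1);
    fclose(f);
    return ok;
}
```

One thing to watch out for with this scheme: any pointers stored inside the dumped structure are only valid in the process that wrote them, so they must be re-based after loading (presumably the "pointer offsets" mentioned above).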
I decided to give this a try because the .z80 quick save that I have in a branch was also presenting the glitch, and I thought it could be related to the format not being able to capture the entire state of a Speccy emulator (a known limitation of the format). I tried the .z80 snapshot both compressed and uncompressed with the same result.
I tested with Fuse and it works ok.
Thanks in advance.
...see Zaxxon and Cybernoid.
it has subtleties and minefields.
It's also really common in a lot of machines of a certain era.
Valuable feedback from Arnaud Carré, maker of StSound:
It's tricky to see the difference, but that's really what happens. If
you disable tone, the output is 1. The CPU can then modulate the register
volume to play "wav" files (that is what Chase HQ is doing). This is a really
popular way of playing samples on the Atari ST, for instance. If you listen
to Chase HQ you can hear a small "click" when the sample ends, because
all values are between 0 and 1, and then you go back to a -1 to 1 signal
(two times louder).
You get the same situation if you disable tone and play an envelope (often
used by Atari musicians to play bass sounds).
The real fix is to compute all your values between 0 and 1 (and that is
really what a YM does; if you think of a tone output, it's 0 or
"volume"). You could play that on a PC and it would sound correct.
Generally emulators keep track of the "current audio level" and always
subtract that (so you get a proper signal centered on 0 instead of
having only positive values in the final audio buffer).
...same solution is needed for beeper.h I think
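The "track the current level and subtract it" idea can be sketched as a simple DC blocker (a generic sketch of the technique, not the actual beeper.h or ay38910.h code; the tracking constant is an arbitrary assumption):

```c
// Track a slowly-moving average of the signal and subtract it, so an
// all-positive 0..1 signal ends up centered around 0 in the output buffer.
typedef struct {
    float dc_level;  /* tracked "current audio level" */
    float speed;     /* tracking speed, e.g. 0.001f (assumption) */
} dc_blocker_t;

static float dc_blocker_next(dc_blocker_t* f, float in) {
    f->dc_level += f->speed * (in - f->dc_level);  /* follow the average */
    return in - f->dc_level;                       /* centered output */
}
```

With this, a constant input decays toward zero output, so a jump between an all-positive range and a centered range no longer produces a loud step in the final buffer.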
...PS:
Be careful, the StSound sources are pretty old! A more recent and cycle-accurate version could be this one: https://bulba.untergrund.net/emulator_e.htm
Also, the final YM output is not linear per voice (you can't use final_out = volA + volB + volC), at least on the Atari ST (it may be because of some hardware filtering after the YM). Anyway, some people have measured it on the Atari and have a volume table like float final_out[16][16][16]. I think the link I mentioned above uses that technique for the final output value.
Basically, I'm trying to XOR A and B together.
In this case, B contains 0xF1, while A contains 0xFF.
When XORed, A gets set to 0x00.
Oddly enough, the numbers directly around this work, so setting A to 0xFE results in the expected 0x0F.
Only 0xFF in A XORed with seemingly any number in any register results in the same issue.
Here's a code example to replicate my findings:
LD A,0xFF
LD B,0xF1
XOR B
Downloaded the latest version of the Header too.
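For reference, the expected results can be checked in plain C (xor8 is just a stand-in for the ALU operation): 0xFF XOR 0xF1 should give 0x0E, not 0x00.

```c
#include <stdint.h>

/* 8-bit XOR, as the Z80 XOR instruction computes it */
static uint8_t xor8(uint8_t a, uint8_t b) {
    return (uint8_t)(a ^ b);
}
```

xor8(0xFF, 0xF1) gives 0x0E, and xor8(0xFE, 0xF1) gives 0x0F, matching the neighboring case that works.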
The 8088 had an 8-bit data bus :D
On the ZX Spectrum pressing Caps Shift does not do anything. In 48k BASIC pressing CAPS SHIFT should toggle between the 'L' and 'C' flashing cursor.
Try typing some letters and they will be in lower case when the cursor is a flashing 'L'. Pressing CAPS SHIFT should make the cursor turn into 'C' and cause upper case letters to be produced when alphabetical keys are pressed.
I fixed this issue by adding the following line in zx.h in _zx_init_keyboard_matrix():
kbd_register_key(&sys->kbd, 0x0E, 0, 0, 0); /* CapsShift */
You load the opcode during the M1/T2 cycle, but what happens if (for some reason) the data on the data bus changes during that cycle? Shouldn't the opcode be loaded during M1/T3 so you're sure the value is the right one?
Thank you for your great work btw! ;)
In _vic20_tick() I forced via1_pins |= M6522_PA6; just before the m6522_tick(). That should set the cassette SENSE pin to 1, but it doesn't seem to work. After I type LOAD on the emulated VIC-20, I get the message PRESS PLAY ON TAPE, but I think I should get SEARCHING instead.
I tried to follow the 6522 code, but it's just too complex for me 😞. I also tried to reset the bit in case of negated logic, but the result is the same.
I'm trying to implement .WAV file loading.
This is pretty hard, because of what happens when you jam a multibyte instruction into the instruction decoder: it pushes the PC of the next instruction, but then goes and fetches 2, 3, or 4 bytes from the bus with ill-defined bus cycles.
You can see pretty clear glitches in this demo using the clock-stepped C64 emulator (not sure how to describe what they look like; I think it'll be obvious if you try running it). Tried in both my own emulator build and the tiny emulators web page.
https://csdb.dk/release/?id=118639
I have not tried this on any older emulator versions; maybe it was broken before as well.
VICE and real hardware work fine with this .prg.
I wonder if I can somehow help debugging this? I'm pretty sure this demo uses VSP at least, and I guess that's pretty timing sensitive.
When using the 6502 CPU debugger:
...the CPC numberpad keys should be mapped to PC numberpad keys....
I'm trying to implement my own memory-mapped I/O device on the C64. Something very simple: when I read location 16384, a value is returned, and the value itself is incremented for the next read. I modified _c64_tick() this way:
// ...
else if (mem_access) {
    if (pins & M6502_RW) {
        unsigned char data;
        if (addr == 16384) {
            /* read from my device (kk is a counter declared elsewhere) */
            data = kk++;
            if (kk == (64+40)) kk = 64;
        }
        else {
            data = mem_rd(&sys->mem_cpu, addr); // normal read
        }
        M6502_SET_DATA(pins, data);
    }
    // ...
The issue I'm experiencing is that sometimes bytes (in groups of 3) are completely lost; they are not received by the CPU, as shown in the picture below:
This is the assembly program I use to read the bytes:
pippo:
lda 16384
jsr $FFD2
jmp pippo
Is there something else I should consider in _c64_tick(), or is this unexpected behavior?
Thank you for your help.
In cpc_key_down/up, if there's a CPC_JOYSTICK_DIGITAL, the space key will be converted to the joystick button CPC_JOYSTICK_BTN0, making it impossible to produce a space in the emulator.
I'd love to submit a PR but I have zero knowledge about the CPC and am unsure what the fix would be.
Just noticed on my Windows machine that Pengo has weird artefacts in some sound effects...
PS: same problem in the Pacman emulation (even worse there) so seems to be a general problem with the Namco sound emulation, only thing that's weird is that it only seems to be happening on Windows. Maybe MSVC specific?
For example:
writeHelloWorld:
; IX used as pointer
LD IX,0
; hello offset + pointer
LD A,(IX+hello)
; output on data bus to LCD
OUT (1),A
; increment IX
INC IX
JP NZ,writeHelloWorld
HALT
hello:
DEFM "Hello, world!",0
This should result in the chip outputting the ASCII characters for "Hello, world!" onto the data bus.
Instead, only 'H' is printed, then the program ends.
As the emulator can either load a disk image or a snapshot, this presents a problem if you want a snapshot to load directly into a CP/M program, as the disk is not accessible for saving/loading/etc.
Another use case is that you want the user to start on the title screen of a game to save load time, but you still want the disk image available to load subsequent levels.
In order to get around this limitation, please implement version 3 snapshots, which are able to also store the disks in the drives as well as other information, e.g.:
More information about the format can be found here.
I read your article 'One year of C' where you mentioned that the entire 8-bit emulator didn't have a single call to malloc.
How can I avoid malloc if I need a variable-size structure like a C++ vector? Every time I need a struct that can contain a variable number of elements, I use this pattern:
struct many {
    int *values;    // pointer to malloc'ed buffer
    int max_values;
    int n_values;
};
I feel like I missed something.
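One common way to avoid the malloc in that pattern is a fixed maximum capacity chosen at compile time (a sketch; MANY_MAX and the helper function are made up for illustration):

```c
#define MANY_MAX 64  /* hypothetical compile-time capacity */

struct many {
    int values[MANY_MAX];  /* storage lives inside the struct itself */
    int n_values;          /* current element count */
};

/* returns 1 on success, 0 if the fixed capacity is exhausted */
static int many_push(struct many* m, int v) {
    if (m->n_values >= MANY_MAX) return 0;
    m->values[m->n_values++] = v;
    return 1;
}
```

The trade-off is that an upper bound must be decided up front; this works when the maximum size is known at compile time, which is typically the case for emulator state.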
I am building an arcade emulator, and using mem.h to create my main memory map. This works very nicely ❤️
However, when I tried to use mem.h to map some of the graphics ROMs, I realised that it is limited to addressing only 64KB. The Rygar tile ROMs are actually 128KB.
Is there any chance we could bump this up higher (maybe 1MB)? I tried editing mem.h locally by increasing MEM_ADDR_RANGE and changing the addr arguments to all be uint32_t, which seemed to work fine.
Was there any reason you set the hard limit at 64KB? I realise that more pages will be created in the mem mapper if we bump the limit, but is there a performance penalty?
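For a rough sense of the cost: the page table grows linearly with the address range (the 1 KB page size below is an assumption for illustration, not necessarily what mem.h actually uses):

```c
/* number of mapping pages needed to cover a given address range */
static unsigned mem_num_pages(unsigned addr_range_bytes, unsigned page_size_bytes) {
    return addr_range_bytes / page_size_bytes;
}
```

Under that assumption, a 64 KB range needs 64 pages and a 1 MB range needs 1024; the table gets 16x bigger, but each access is still one index plus an offset, so per-access cost should not change.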
Just a terrible idea I thought I would send your way :).
Based on a comment in z80.h it looks like there are plans to implement the RESET pin, but I am curious if there are plans to emulate the BUSACK and BUSREQ pins (useful for Sega Mega Drive).
Line 342 in 05cd84e
This line looks incorrect to me.
Let me take an example. The SHARP MZ-700 has an LSI which handles the PAL/NTSC video circuit and also decodes the address bus for dispatching between DRAM, ROM, peripherals and VRAM. If the CPU is accessing the video RAM ($D000-$DFFF) and we are not currently in horizontal blank, then /WAIT is set to 0 to pause the CPU and avoid a VRAM access conflict. So /WAIT is set to 0 because the LSI decodes a VRAM address present on the address bus (meaning a _set_ab_x(addr, Z80_MREQ|Z80_WR); should be done before the call to _wait()).
That means the emulated LSI chip would need to check that address first, to set Z80_WAIT on the first tick of MWR T1, and complete the access on the next tick (MWR T2) just after passing the _wait() call. So just setting Z80_MREQ|Z80_WR on the second tick is wrong; it should be done in two parts: first setting the address bus with Z80_MREQ|Z80_WR|Z80_AS (so the system can set Z80_WAIT if needed), then Z80_MREQ|Z80_WR|Z80_DS (so the CPU can complete the transfer when the system clears Z80_WAIT).
elif mcycle.type == 'mwrite':
    l(f'// -- mwrite')
    addr = mcycle.items['ab']
    data = mcycle.items['db']
    add(f'_set_ab_x({addr},Z80_MREQ|Z80_WR|Z80_AS)')
    add(f'_wait();_set_db_x({data},Z80_MREQ|Z80_WR|Z80_DS);{action}')
    add('')
Maybe there is a better solution than having Z80_AS and Z80_DS but I'm pretty sure Z80_WAIT is not well handled currently.
Same issue with Z80_IORQ|Z80_WR. :)
Still with the SHARP MZ-700: the access to the Monitor ROM ($0000-$0FFF) requires one wait state, so you have the same issue here as well.
cld
ldx #$ff
txs
lda #$ff
pha
plp
brk
On visual6502, upon finishing the BRK instruction, D is cleared; m6502, on the other hand, has it set.
Line 863 of z80.template.h
if (nmi || (((pins & (Z80_INT|Z80_BUSREQ))==Z80_INT) && (r2 & _BIT_IFF1))) {
Can we just change to:
if (nmi || (((pins & Z80_INT)==Z80_INT) && (r2 & _BIT_IFF1))) {
or was this an idea that hasn't been implemented yet?
For example:
0000: DD 21 EF BE LD IX,BEEF
0004: 2A AD DE LD HL,(DEAD)
Put a breakpoint on 0004 in the debugger, then single step. IX gets the value of (DEAD) instead of HL.
In z80_exec the "call track evaluation callback if set" block may get a non-zero trap_id from the trap callback and then break out of the loop, thereby skipping the following logic in the "clear state bits for next instruction" block. This causes 'map_bits' to persist the _BITS_USE_IXIY flags to the next instruction, which if you're unlucky might reference HL.
I wasn't confident the entire "clear state bits for next instruction" block should happen before the trap since the 'pins' are passed into the trap but modified by that block.
From a Twitter conversation with Petri Häkkinen:
The decimal mode test code (http://www.6502.org/tutorials/decimal_mode.html) passes when run in VICE so I assume we both have a bug :) But I got it working by following the instructions from http://6502.org exactly to the letter. Here are relevant parts of my code. Feel free to use or not use, crediting is optional :) https://dropbox.com/scl/fi/vimkrr842wia8l5v8ba0u/bcd.cpp?rlkey=wo0lwso13qqqj03a549kpq2pk&dl=0
uint8_t ra = 0;
uint8_t rx = 0;
uint8_t ry = 0;
uint8_t neg = 0;
uint8_t zero = 0;
uint8_t carry = 0;
uint8_t interrupt_disable = 1;
uint8_t decimal = 0;
uint8_t overflow = 0;
static void update_nz(uint8_t v)
{
    // update flags n & z
    neg = ((v & 0x80) >> 7);
    zero = (v == 0);
}

static void adc(uint8_t operand)
{
    if(decimal)
    {
        // lower nibble
        uint16_t temp = (uint16_t)(ra & 0xf) + (uint16_t)(operand & 0xf) + (uint16_t)carry;
        if(temp >= 0xa)
            temp = ((temp + 6) & 0xf) + 0x10; // wrap and set carry for upper nibble
        // upper nibble
        temp = (uint16_t)(ra & 0xf0) + (uint16_t)(operand & 0xf0) + temp;
        neg = ((temp & 0x80) >> 7);
        overflow = (~(ra ^ operand) & (ra ^ temp) & 0x80) >> 7;
        // zero flag is set according to binary version of adc!
        zero = ((ra + operand + carry) & 0xff) == 0;
        if(temp >= 0xa0)
            temp += 0x60; // wrap
        ra = (uint8_t)temp;
        carry = temp >= 0x100;
    }
    else
    {
        uint16_t temp = (uint16_t)ra + (uint16_t)operand + (uint16_t)carry;
        update_nz(temp);
        carry = temp >= 0x100;
        overflow = (~(ra ^ operand) & (ra ^ temp) & 0x80) >> 7;
        ra = temp;
    }
}

static void sbc(uint8_t operand)
{
    if(decimal)
    {
        // lower nibble
        uint16_t temp = (uint16_t)(ra & 0xf) - (uint16_t)(operand & 0xf) - (uint16_t)!carry;
        if((int16_t)temp < 0)
            temp = ((temp - 6) & 0xf) - 0x10; // wrap and clear carry for upper nibble
        // upper nibble
        temp = (uint16_t)(ra & 0xf0) - (uint16_t)(operand & 0xf0) + temp;
        if((int16_t)temp < 0)
            temp -= 0x60; // wrap
        // flags are set according to binary version of sbc!
        uint16_t temp_bin = (uint16_t)ra - (uint16_t)operand - (uint16_t)!carry;
        update_nz((uint8_t)temp_bin);
        carry = temp_bin < 0x100;
        overflow = ((ra ^ operand) & (ra ^ temp_bin) & 0x80) >> 7;
        ra = temp;
    }
    else
    {
        uint16_t temp = (uint16_t)ra - (uint16_t)operand - (uint16_t)!carry;
        update_nz(temp);
        carry = temp < 0x100;
        overflow = ((ra ^ operand) & (ra ^ temp) & 0x80) >> 7;
        ra = temp;
    }
}
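A standalone sanity check of the decimal-mode ADC nibble algorithm above (the same steps as adc(), inlined here without the flag handling):

```c
#include <stdint.h>

/* decimal-mode add: adjust each BCD nibble separately, as in adc() above */
static uint8_t bcd_adc(uint8_t a, uint8_t operand, uint8_t carry_in, uint8_t* carry_out) {
    uint16_t temp = (uint16_t)(a & 0xf) + (operand & 0xf) + carry_in;
    if (temp >= 0xa) {
        temp = ((temp + 6) & 0xf) + 0x10;  /* wrap, carry into upper nibble */
    }
    temp += (uint16_t)(a & 0xf0) + (operand & 0xf0);
    if (temp >= 0xa0) {
        temp += 0x60;                      /* wrap upper nibble */
    }
    *carry_out = (temp >= 0x100);
    return (uint8_t)temp;
}
```

For example, bcd_adc(0x19, 0x01, 0, &c) gives 0x20 with carry clear, and 0x99 + 0x01 wraps to 0x00 with carry set.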
One more thing: a 65C02 produces different flags in decimal mode than an NMOS 6502. I guess it's possible MAME implements this behavior, but I haven't checked. The differences are detailed on the http://6502.org page.
do you have plans for adding the Z80 SIO to the list of emulated chips?
Merging the iorq() and tick() functions into one creates a new problem in some home computer emulators where register reads/writes from the CPU can happen at any time, but the chip is ticked at a slower frequency than the CPU (this happens in the CPC emulator, where the CPU runs at 4 MHz, but the AY-3-8910 and MC6845 are ticked at 1 MHz).
To fix this, support chips should have a CLK input pin, and the "per-tick actions" should only happen when this CLK pin is set.
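The proposed CLK gating could look roughly like this (the pin position, names, and divider below are assumptions for illustration, not the actual chips API):

```c
#include <stdint.h>

#define CHIP_CLK (1ULL << 40)  /* hypothetical CLK input pin */

typedef struct { int ticks; } chip_t;

/* the chip is called on every CPU tick, but per-tick actions only run on CLK */
static uint64_t chip_tick(chip_t* c, uint64_t pins) {
    if (pins & CHIP_CLK) {
        c->ticks++;  /* per-tick work (counters, envelopes, ...) */
    }
    /* register read/write decoding could still happen on every call here */
    return pins;
}
```

A 4 MHz system would then set CHIP_CLK on every 4th call, so the chip effectively runs at 1 MHz while still seeing every CPU bus access.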
Apparently there's a bug in the 6502's NMI detection (via Mathias Bergvall):
However, while looking at this I did find a scenario where NMI is totally ignored by the m6502 simulator. That is when NMI is asserted before the third step of a branch instruction (case (op<<3)|2).
nmi_pip is set to 1 at the start of tick() which is then right-shifted (because of "taken branch delays interrupt handling by one instruction" handling I presume) and not shifted back in at the end.
Is finishing the 1541 implementation still on the roadmap?
not really an issue, but since you mentioned the lorenz testsuite ... we have collected a whole lot of test programs for VICE here: https://sourceforge.net/p/vice-emu/code/HEAD/tree/testprogs/ (i have fixed a bunch of things in the lorenz suite for that matter, all tests pass on a real C64, and using both types of CIAs -> https://sourceforge.net/p/vice-emu/code/HEAD/tree/testprogs/general/Lorenz-2.15/ )
there is also a testbench script that can run a lot of them automatically and produce a result like here: https://vice-emu.pokefinder.org/index.php/Testbench
have fun!
what is the status of the c1530 emulation, is it working?
I'm trying to make it work with the VIC-20, but after vic20_insert_tape() and vic20_play_tape() the SENSE and MOTOR pins stay unchanged. I can't get beyond the PRESS PLAY ON TAPE message.
I noticed that the C1530_CASPORT_MOTOR pin is written by c1530_play() and c1530_stop(), but I think this should be an input pin to the tape, not an output.
the floppy disc emulation doesn't take the timing of 'physical' components into account (e.g. all seek times are 'immediate'); this causes some demos to temporarily speed up during disc loading
in the Demoizart initial part, there's some overlay scrolling text missing during the rotating cube sequence
some demos still have 'vertical' timing issues:
Demoizart 'Critical Part':
PhX: in the rotating 3D cube section, lower border:
And in the next text scroller section upper border:
in Batman Forever, subtle flickering pixel junk in text scroller parts
From Scratch: this bar on the left in the first plasma screen (can also be tweaked by moving IO reads one tick backward):
...and later From Scratch hangs completely.
I was wondering if there is any documentation on creating a new emulation? Or perhaps on how to change the code that is in an emulator's ROM? I'm trying to modify the LC80 emulation so I can play with the interrupt handling on the CTC.
I'd also like to be able to add some 74-series chips to test out my glue logic.
I suppose the ultimate would be to be able to load up a KiCad Schematic and emulate that.
I was looking at the Z80 emulator (very nice!) and I think the wait state handling is incorrect, except in the opcode fetch. A memory system or IO peripheral is supposed to be able to assert wait states relevant to it, based on decoding the address and control lines, but that wouldn't currently be possible in the emulator. Consider for example:
// ED 70: IN (C) (M:2 T:8)
// -- ioread
case 1196: goto step_next;
case 1197: goto step_next;
case 1198: _wait();_ioread(cpu->bc);goto step_next;
case 1199: cpu->dlatch=_gd();cpu->wz=cpu->bc+1;goto step_next;
// -- overlapped
case 1200: _z80_in(cpu,cpu->dlatch);goto fetch_next;
/WAIT is checked simultaneously with asserting the address and relevant control pins, so a peripheral doesn't have time to assert /WAIT based on them.
Evidence:
Regarding memory access, the Z80 manual says (page 8 in the current PDF from zilog.com):
"During T2 and every subsequent automatic WAIT state (TW), the CPU samples the WAIT line with the falling edge of the clock. If the WAIT line is active at this time, another WAIT state is entered during the following cycle." However this is oversimplified. More accurately:
(As a sanity check, I looked at the Z80180 manual as well. Fig 9-13 in the current PDF from zilog.com agree with the above, except for IO. By default the '180 has different IO timing, but a config register can make it match the Z80)
So, I think that:
A fix would require moving some steps to earlier ticks. For example, I think the code above should be more like:
// ED 70: IN (C) (M:2 T:8)
// -- ioread
case 1196: _sax(cpu->bc,Z80_IORQ); goto step_next; // T1
case 1197: pins |= Z80_RD; goto step_next; // T2
case 1198: _wait();goto step_next; // TW
case 1199: cpu->dlatch=_gd();cpu->wz=cpu->bc+1;goto step_next; // T3
// -- overlapped
case 1200: _z80_in(cpu,cpu->dlatch);goto fetch_next;
(but probably using new macros)
Imgui 1.79 changes how the clipper API works. This breaks the disassembly and execution history views in the ui_dbg.h module.
ImGuiListClipper: Renamed constructor parameters which created an ambiguous alternative to using the ImGuiListClipper::Begin() function, with misleading edge cases. Always use ImGuiListClipper::Begin()! Kept inline redirection function (will obsolete). Note: imgui_memory_editor in version 0.40 from imgui_club used this old clipper API. Update your copy if needed.
Hello Andre,
Thanks for all your great work on the emulator, it's really impressive what you have achieved.
I just wanted to let you know that I've started work on adding full AtoMMC support to your Atom emulator, with the goal of being able to run the AtomSoftwareArchive on the web.
It's definitely work in progress, but you can try it out here:
https://hoglet67.github.io/
The first time this is loaded will be slow, as there is a ~20MB file download. This is then cached locally, so subsequent loads are much faster.
*MENU will start the Atom Software Archive MENU system (normally you would do SHIFT BREAK, but that's not currently possible in the emulator).
The current AtoMMC source is here:
https://github.com/hoglet67/chips/blob/master/chips/atommc.h
I'm struggling a bit with the build system. There are a few emcc settings that need changing, e.g.:
set(EMSC_LINKER_FLAGS "${EMSC_LINKER_FLAGS} -s FORCE_FILESYSTEM=1")
set(EMSC_LINKER_FLAGS "${EMSC_LINKER_FLAGS} -s LZ4=1")
I'm not sure how to set these just for the atom and atom-ui targets. I also found the LZ4 support wasn't working with the closure compiler in the release build. And I also ended up packaging the AtomSoftwareArchive files manually.
The main issue I'm seeing with the emulator is with keyboard handling. Lots of Atom games use Ctrl, Shift and Rept, as these are easy to read. They seem to only work when used with another key, not on their own.
On a real Atom, the following code should detect Shift and Ctrl being pressed:
DO PRINT ?#B001;UNTIL 0
Is this something that's fixable?
Anyway, this seems like a fun Christmas project, so I'll keep working on it over the next few days.
Dave
If I initialize a pixelbuffer for a c64 system with the following code:
const size_t framebuf_size = 392*272*4;
uint8_t* framebuf = malloc(framebuf_size);
c64_desc_t desc = {0};
desc.pixel_buffer = framebuf;
desc.pixel_buffer_size = framebuf_size;
// Initialize C64 system
c64_t sys;
c64_init(&sys, &desc);
I get this assertion:
janne@janne-X10SRA:~/dev/c64emu$ gcc c64.c -o c64 && ./c64
hello world
c64: chips/systems/c64.h:247: c64_init: Assertion `!desc->pixel_buffer || (desc->pixel_buffer_size >= ((((62)+1)*8)*((311)+1)*4))' failed.
Aborted (core dumped)
But a quick reading of the systems/c64.h docs would suggest that this pixel_buffer_size should be ok:
/* video output config (if you don't want video decoding, set these to 0) */
void* pixel_buffer; /* pointer to a linear RGBA8 pixel buffer, at least 392*272*4 bytes */
int pixel_buffer_size; /* size of the pixel buffer in bytes */
I'm using your m6502 to verify the accuracy of the 6502 emu I'm writing, and encountered this corner case in the blargg NES CPU test suite:
PC:0200 S:90 A:ff X:02 Y:01 nvTBdizc O:9c @shy $02fe, x
m6502 writes to 0x300 upon encountering this instruction, and I believe the original intent of the code (and also what I observe in visual6502 remix) is to write to 0x200 instead.
/* SHY abs,X (undoc) */
case (0x9C<<3)|0: _SA(c->PC++);break;
case (0x9C<<3)|1: _SA(c->PC++);c->AD=_GD();break;
case (0x9C<<3)|2: c->AD|=_GD()<<8;_SA((c->AD&0xFF00)|((c->AD+c->X)&0xFF));break;
case (0x9C<<3)|3: _SA(c->AD+c->X);_SD(c->Y&(uint8_t)((_GA()>>8)+1));_WR();break;
The 3rd cycle actually does the expected page-cross wrap, and a read happens at 0x200; however, AD isn't set to this address, so the 4th cycle again uses 0x2fe and adds X again for the write (this time without the page-cross wrap), resulting in a write to 0x300.
Hi! I've experimented with your excellent chip implementations in my program, but there appears to be a bug in the ringmod handling. Example: Rambo Loader by Martin Galway, at 01:42-01:48 - The ringmod seems to be missing there. Hope you'll have the chance to look into it some day. Thank you!
maybe related : #61
After the prefetch function is called (or init/reset), the first call to cycle() results in PC increased by 1 and op_done() flagged. To actually get the first instruction executed you have to call cycle() again in a loop until op_done(), and if you add up the cycles you'll end up with one too many.
here's a naive implementation that uses your z80 to emulate the behaviour of other z80 core's z80_step() function:
static unsigned csz80_step_one(csz80_state* csz) {
    unsigned cyc = 0;
    do {
        csz->pins = csz80_tick(&csz->cpu, csz->pins);
        if (csz->pins & Z80_MREQ) {
            const uint16_t addr = Z80_GET_ADDR(csz->pins);
            if (csz->pins & Z80_RD) {
                uint8_t data = Z80_READ_BYTE(csz->userdata, addr);
                Z80_SET_DATA(csz->pins, data);
            } else if (csz->pins & Z80_WR) {
                uint8_t data = Z80_GET_DATA(csz->pins);
                // do not write again, it's already done by our z80
                // memory[addr] = data;
                (void)data;
            }
        } else if (csz->pins & Z80_IORQ) {
            const uint16_t port = Z80_GET_ADDR(csz->pins);
            if (csz->pins & Z80_RD) {
                //in(z, port);
            } else if (csz->pins & Z80_WR) {
                //out(z, port, 0);
            }
        }
        ++cyc;
    } while (!csz80_opdone(&csz->cpu));
    return cyc;
}
In order to get e.g. the first instruction xor a executed, you have to call that step function twice, and the total cycles will be 5 instead of 4, while the PC afterwards is 2 instead of 1.
I checked the datasheet of the MC6847 video chip, and it seems to me that some numbers do not reflect the real hardware.
The real chip draws TWO pixels per clock cycle; effectively it is like drawing ONE pixel at twice the clock speed, so I think the MC6847_TICK_HZ constant should be doubled.
The horizontal video frame is 227.5 clock cycles, resulting in 227.5 x 2 = 455 pixels, so the constant MC6847_DISPLAY_WIDTH should be raised from 320 to 455.
In the software emulation the border pixels are equally divided:
#define MC6847_BORDER_PIXELS ((MC6847_DISPLAY_WIDTH-MC6847_IMAGE_WIDTH)/2)
but according to the datasheet the left border is slightly greater than the right one, as per the following numbers:
blank area: 84 pixels
left border: 59 pixels
visible area: 256 pixels
right border: 56 pixels
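As a quick cross-check, the numbers above are self-consistent (the enum names are made up here to mirror the constants discussed, not taken from mc6847.h):

```c
/* horizontal line layout in pixels, two pixels per clock cycle */
enum {
    MC6847_BLANK        = 84,
    MC6847_LEFT_BORDER  = 59,
    MC6847_VISIBLE      = 256,
    MC6847_RIGHT_BORDER = 56,
    MC6847_LINE_PIXELS  = MC6847_BLANK + MC6847_LEFT_BORDER
                        + MC6847_VISIBLE + MC6847_RIGHT_BORDER,
};
```

The four segments sum to 455 pixels, which matches 227.5 clock cycles times two pixels per cycle.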