Anki is a smart cross-platform flashcard program. I learned about it around the time I started studying Japanese on WaniKani, which employs a spaced repetition strategy similar to Anki’s. I finally started using Anki in 2016, but I should have been using it my whole life! It can be clunky, but it works. I am now transitioning to recording anything I want to remember in a deck, in addition to or even instead of in a curated file or notebook I might realistically never remember to review.
Some of my Anki decks are here, mostly for version control. I have not yet found the time and strategy to curate them for collaboration. They contain a combination of decks downloaded from Anki’s shared decks catalog, some of which I have modified, and decks I created myself. They are mostly not cohesive or exhaustive but rather reflect my use of Anki in lieu of jotting down on paper things I want to remember.
The folders in this repository each represent one deck, a somewhat confusing term since Anki decks are hierarchical and can contain subdecks. I export the decks both in Anki’s plain text format and in a JSON format generated by @Stvad's plugin CrowdAnki.
I still need to work on my system for what Anki calls “note types,” including the code that formats the cards for display. Shared decks come with their own display code built in, but I try to keep my decks consistent, so I usually convert them to a note type I have already made or edit their look. I have only poked at that goal periodically, though. It would help if Anki shared decks were just the content, or if the content and display code were more separated; I have already accidentally overwritten my custom code by reimporting a shared deck. The system is a bit tough to get right. So until I get that straightened out, here are a few examples of how I try to get my cards to look.
I tend to copy data into a text editor and manipulate it into tab-separated fields that I can import into Anki. Here is an example of doing that for a Unix utility manpage. The regular expression find-replace command is at the bottom. (Given how many of these I have done now, I really should have automated this.)
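The same kind of find-replace can be scripted. Here is a minimal sketch of the idea in Python, using a made-up excerpt in the style of `cat(1)`'s option list (the actual manpage and regex I used may differ): one regular expression splits each option from its description, and the results become tab-separated lines that Anki's plain text importer accepts as front/back fields.

```python
import re

# Hypothetical manpage excerpt: option, run of spaces, description.
manpage_excerpt = """\
-b, --number-nonblank    number nonempty output lines
-n, --number             number all output lines
-s, --squeeze-blank      suppress repeated empty output lines
"""

rows = []
for line in manpage_excerpt.splitlines():
    # Option text up to the first run of 2+ spaces, then the description.
    m = re.match(r"(\S.*?)\s{2,}(.+)", line)
    if m:
        rows.append(f"{m.group(1)}\t{m.group(2)}")

# One card per line: option on the front, description on the back.
tsv = "\n".join(rows)
print(tsv)
```

In an editor like Vim the equivalent substitution would be something along the lines of `:%s/\s\{2,}/\t/`, adjusted per manpage.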
For larger datasets, I have combined shell and Python scripting. I scraped almost 10,000 HTML pages on a particular website to create nearly 20,000 Japanese-language cards complete with audio.
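A rough sketch of the Python half of such a pipeline, using only the standard library: the class names, CSS classes, and page layout below are invented for illustration (the real site's markup differed, and the download step, e.g. a `curl` loop in the shell, is elided). Each parsed entry becomes a tab-separated line whose last field uses Anki's `[sound:...]` syntax to reference a downloaded audio file.

```python
from html.parser import HTMLParser

class EntryParser(HTMLParser):
    """Collect (word, reading, audio URL) triples from an assumed
    layout where each <li> holds a word span, a reading span, and
    an <audio><source src="..."> element."""

    def __init__(self):
        super().__init__()
        self._field = None     # which span we are inside, if any
        self._current = {}     # fields of the entry being built
        self.entries = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "span" and attrs.get("class") in ("word", "reading"):
            self._field = attrs["class"]
        elif tag == "source" and "src" in attrs:
            self._current["audio"] = attrs["src"]

    def handle_data(self, data):
        if self._field:
            self._current[self._field] = data.strip()
            self._field = None

    def handle_endtag(self, tag):
        # In this assumed layout, </li> closes one entry.
        if tag == "li" and self._current:
            self.entries.append((self._current.get("word", ""),
                                 self._current.get("reading", ""),
                                 self._current.get("audio", "")))
            self._current = {}

# A tiny stand-in for one downloaded page.
page = """
<ul>
  <li><span class="word">犬</span><span class="reading">いぬ</span>
      <audio><source src="/audio/inu.mp3"></audio></li>
  <li><span class="word">猫</span><span class="reading">ねこ</span>
      <audio><source src="/audio/neko.mp3"></audio></li>
</ul>
"""
parser = EntryParser()
parser.feed(page)

# Anki plays audio referenced as [sound:filename] in a field,
# provided the file itself is placed in the collection.media folder.
lines = [f"{w}\t{r}\t[sound:{a.rsplit('/', 1)[-1]}]"
         for w, r, a in parser.entries]
```

In practice I would lean on a proper parser like Beautiful Soup for messy real-world HTML, but the structure of the job is the same: fetch, parse, emit tab-separated fields, and copy the media files into Anki's media folder.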