
ohare93 / brain-brew

Automated Anki flashcard creation and extraction to/from Csv

License: The Unlicense


brain-brew's Introduction

Brain-Brew

Brain Brew is an open-source flashcard manipulation tool designed to let users convert their Anki flashcards to and from many different formats to suit their own needs. The goal is to facilitate collaboration and maximize user choice, with a powerful tool that minimizes effort. CrowdAnki Exports and Csvs are currently the only supported file types, with more to come.

Anki Ultimate Geography is currently the best working example of a Flashcard repo using Brain Brew 🎉 See there for inspiration!

Installation

Install the latest version of Brain Brew from PyPI with pip install brain-brew. Using a virtual environment (such as pipenv) is recommended!

❗ See the Brain Brew Starter Project for a working, clone-able Git repo. From it you can create a functional Brain Brew setup automatically, with your own flashcards, simply by running

brainbrew init [Your CrowdAnki Export Folder]

This will generate the entire working repo for you, including the recipe files, source files, and build folder. For bi-directional sync: Anki <-> Source!

See the starter repo for a step-by-step guide for all of this.

Usage

Brain Brew runs from the command line and takes a Recipe.yaml file to run.

brainbrew run source_to_anki.yaml

Full usage help text:

Brain Brew vx.y.z
usage: brainbrew [-h] {run,init} ...

Manage Flashcards by transforming them to various types.

positional arguments:
  {run,init}  Commands that can be run
    run       Run a recipe file. This will convert some data to another format, based on the instructions in the recipe file.
    init      Initialise a Brain Brew repository, using a CrowdAnki export as the base data.

optional arguments:
  -h, --help  show this help message and exit

Recipes

These are the instructions for how Brain Brew will brew your data into another format.

What's YAML? See the current spec here.

Run a recipe with --verify or -v to confirm your recipe is valid, without actually running it. A dry run of sorts.

Tasks

A recipe is made up of many individual tasks, each of which performs a specific function. A full detailed list is coming soon™️, but see the Yamale recipe schema (local file: brain_brew/schemas/recipe.yaml) in the meantime 👍

The Why

Brain Brew was made in an effort to solve some of the following issues with the current state of Anki flashcard collaboration:

Sharing Personal Information or Copyrighted Material

Have some personal notes on your cards? Used some images randomly taken from the internet? That usually means you cannot share your deck as-is without going to the effort of removing the offending material and/or maintaining two separate copies.

Having to Pick Between Source Control or Anki Editing

Putting your cards into a source control system brings a lot of benefits. You can see any changes that occur, go back in time should a mistake be discovered, and collaborate with others.

However, the current tools for managing Anki cards in source control (such as Anki-DM, GenAnki, and Remote Decks) are only one-way. You generate cards from a csv into a file that can only be imported into Anki. There is no way to export them back, meaning a user must manually copy their changes over, or simply not edit their cards anywhere other than in source control.

This robs the user of two important workflows:

  1. Editing/fixing cards in Anki as you review them (on desktop or mobile)
  2. The plethora of amazingly useful Anki add-ons that already exist, e.g. Image Occlusion, Morphman, AwesomeTTS.

A user should not have to pick between these fantastic workflows and the use of source control to structure, manage, and share their cards.

Lack of Formatting Choice

Csvs are great for editing data, but can only go so far by themselves. Having all the data inside one csv leaves a lot to be desired and can result in eventual problems. When a csv gets as many columns as this one (from Ultimate Geography), it becomes a nightmare to manage:

guid Country Country:de Country:es Country:fr Country:nb "Country info" "Country info:de" "Country info:es" "Country info:fr" "Country info:nb" Capital Capital:de Capital:es Capital:fr Capital:nb "Capital info" "Capital info:de" "Capital info:es" "Capital info:fr" "Capital info:nb" "Capital hint" "Capital hint:de" "Capital hint:es" "Capital hint:fr" "Capital hint:nb" Flag "Flag similarity" "Flag similarity:de" "Flag similarity:es" "Flag similarity:fr" "Flag similarity:nb" Map tags
crr.AfnVRi England England Inglaterra Angleterre England "Constituent country of the United Kingdom." "Landesteil des Vereinigten Königreichs." "Nación constitutiva del Reino Unido." "Nation constitutive du Royaume-Uni." "Land som utgjør en del av Storbritannia." London London Londres Londres London "Not a sovereign country" "Kein souveräner Staat" "No es un país soberano" "Pas une nation souveraine" "Ikke selvstendig land" "<img src=""ug-flag-england.svg"" />" "<img src=""ug-map-england.png"" />" UG::Europe
"h<B?Kff,?3" "Ivory Coast" Elfenbeinküste "Costa de Marfil" "Côte d’Ivoire" Elfenbenskysten "Officially Côte d'Ivoire." "Offiziell Côte d'Ivoire." "Oficialmente Côte d'Ivoire." Yamoussoukro Yamoussoukro Yamusukro Yamoussoukro Yamoussoukro "While Yamoussoukro is the official capital, Abidjan is the de facto seat of government." "Yamoussoukro ist die offizielle Hauptstadt, aber Abidjan ist der Regierungssitz." "Aunque Yamusukro es la capital oficial, Abiyán es la capital de facto." "Bien que Yamoussoukro soit la capitale officielle, Abidjan est le siège du gouvernement." "Yamoussoukro er offisiell hovedstad, mens Abidjan er de facto regjeringssete." "<img src=""ug-flag-ivory_coast.svg"" />" "Ireland (orange and green flipped, wider)" "Irland (Orange und Grün vertauscht, breiter)" "Irlanda (naranja y verde intercambiados, más ancha)" "Irlande (orange et vert inversés, plus large)" "Ireland (byttet plass på oransje og grønt, bredere)" "<img src=""ug-map-ivory_coast.png"" />" "UG::Africa UG::Sovereign_State UG::West_Africa"

Then there is the problem of having too many rows in one csv for it to be managed properly.

Features of Brain Brew

Multi-directional Card Syncing

Make changes in your source file and sync those into your Anki collection.

Make changes inside Anki and pull those back into the source.

Any user of your shared deck can make a change inside Anki and at some later point export their deck (or just part of it) using CrowdAnki. Then the source file can be updated with their changes and a new CrowdAnki Export for all users to import can be generated with one run of Brain Brew.

Modular Configuration Files

Yaml config files drive Brain Brew's conversions, allowing users to easily change the functionality as they wish.

- generate_guids_in_csv:
    source: src/data/words.csv
    columns: [ guid ]

- build_parts:
  - note_model_from_yaml_part:
      part_id: LL Word
      file: src/note_models/LL Word.yaml

  - headers_from_yaml_part:
      part_id: default header
      file: src/headers/default.yaml
      override:  # Optional
        deck_description_html_file: src/headers/desc.html

  - media_group_from_folder:
      part_id: all_media
      source: src/media
      recursive: true  # Optional

  - notes_from_csvs:
      part_id: english-to-danish

      note_model_mappings:
        - note_models:
            - LL Word
          columns_to_fields:  # Optional
            guid: guid
            tags: tags

            english: English
            danish: Word
            picture: Picture
            danish audio: Pronunciation (Recording and/or IPA)
      
      file_mappings:
        - file: src/data/words.csv
          note_model: LL Word
          sort_by_columns: [english]  # Optional
          reverse_sort: no  # Optional

Personal Fields

Deck managers can set specific fields to be "Personal", meaning they will not overwrite an existing value on import.

A working version currently exists, with a full PR to CrowdAnki coming soon!
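The intended import rule can be sketched as follows (merge_note and the field names are hypothetical illustrations, not the actual CrowdAnki change):

```python
# Sketch of the Personal Fields rule: on import, a field marked as
# personal keeps its existing local value instead of being overwritten
# by the incoming shared deck.
def merge_note(existing, incoming, personal_fields):
    """Return the incoming note, but keep non-empty personal field values."""
    merged = dict(incoming)
    for field in personal_fields:
        if existing.get(field):
            merged[field] = existing[field]
    return merged

local = {"Word": "hund", "Extra": "my own mnemonic"}
shared = {"Word": "hund (dog)", "Extra": ""}
result = merge_note(local, shared, personal_fields=["Extra"])
# result keeps the local "Extra" but takes the updated shared "Word"
```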

Extensibility and Open Source

Brain Brew is free for all to use, modify, or sell.

Further source types are relatively easy to add due to the flexible nature of the backend. Instead of creating a Csv <-> CrowdAnki converter, Brain Brew first goes through a middle layer called "Deck Parts". These consist of Notes, Headers, Note Models, and Media files.

Each new source type added to Brain Brew (such as Markdown) need only convert between itself and Deck Parts, and suddenly it can convert to and from all existing source types!
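The middle-layer idea can be sketched like so (the class and function names are hypothetical, not Brain Brew's API): each source type implements only a conversion to and from the shared Deck Parts representation, and any-to-any conversion is the composition of the two.

```python
# Hypothetical sketch of the "Deck Parts" hub-and-spoke design.
class CsvSource:
    @staticmethod
    def to_parts(rows):
        # parse csv rows into the shared intermediate representation
        return {"notes": rows}

class MarkdownSource:
    @staticmethod
    def from_parts(parts):
        # render the shared representation as a markdown list
        return "\n".join(f"- {note['word']}" for note in parts["notes"])

def convert(reader, writer, data):
    # any reader/writer pair composes via the middle layer
    return writer.from_parts(reader.to_parts(data))

md = convert(CsvSource, MarkdownSource, [{"word": "hund"}, {"word": "kat"}])
```

Adding a new format means writing one to_parts/from_parts pair, not a converter per existing format.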

Smart Csvs

Csvs only update the rows which have changed, meaning a user can import a subset of their changed cards and still update the source file without deleting the cards they did not include.
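A minimal sketch of that idea, assuming rows are keyed on their guid column (merge_rows is hypothetical, not Brain Brew's code):

```python
# Updates are keyed on guid, so rows absent from the imported subset
# are left untouched rather than deleted.
def merge_rows(source_rows, updated_rows, key="guid"):
    updates = {row[key]: row for row in updated_rows}
    return [updates.get(row[key], row) for row in source_rows]

source = [
    {"guid": "a1", "word": "hund"},
    {"guid": "b2", "word": "kat"},
]
subset = [{"guid": "b2", "word": "kat (cat)"}]  # only one row was edited
merged = merge_rows(source, subset)
```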

Csv Splitting / Derivatives

Split data into multiple csvs so that your data is neatly organised however you like.

The following two csv files both contain information about England, split between them:

data-main.csv
guid country flag map tags
"e+/O]%*qfk England <img src=""ug-flag-england.svg"" /> <img src=""ug-map-england.png"" /> UG::Europe
data-capital.csv
country capital capital de capital es capital fr capital nb
England London London Londres Londres London

Brain Brew can be told that data-capital is a derivative of data-main in the build config file as such:

- file: src/data/data-main.csv               # <---- Main
  note_model: Ultimate Geography
  derivatives:
    - file: src/data/data-country.csv
    - file: src/data/data-country-info.csv
    - file: src/data/data-capital.csv        # <---- Capital
    - file: src/data/data-capital-info.csv
    - file: src/data/data-capital-hint.csv
    # note_model: different_note_model
    # derivatives:
    # - file: derivative-of-a-derivative.csv
      # derivatives:
      # - file: infinite-nesting.csv
    - file: src/data/data-flag-similarity.csv

When run, Brain Brew will perform the following steps for each derivative:

  1. Find which columns in the derivative csv match the main (only country in this case)
  2. Go through each row in the derivative and find the row with matching values in the main file
  3. Add the extra columns (capital in each language) to that matching row in the main file
Resulting csv data
guid country flag map tags capital capital de capital es capital fr capital nb
"e+/O]%*qfk England <img src=""ug-flag-england.svg"" /> <img src=""ug-map-england.png"" /> UG::Europe London London Londres Londres London
Note:
  1. Derivatives can also have derivatives.

  2. Csv splitting works in both directions, to and from csv.

  3. Derivatives can be given a Note Model, which overrides their parent's note model for all the matched rows.
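The merge steps above can be sketched as follows (apply_derivative is a hypothetical illustration, not Brain Brew's implementation):

```python
def apply_derivative(main_rows, derivative_rows):
    """Fold a derivative csv's extra columns into the matching main rows."""
    shared = set(main_rows[0]) & set(derivative_rows[0])   # step 1: matching columns
    extra = set(derivative_rows[0]) - shared
    for d_row in derivative_rows:
        for m_row in main_rows:                            # step 2: find matching row
            if all(m_row[c] == d_row[c] for c in shared):
                for c in extra:                            # step 3: add extra columns
                    m_row[c] = d_row[c]
    return main_rows

main = [{"country": "England", "flag": "ug-flag-england.svg"}]
capital = [{"country": "England", "capital": "London"}]
merged = apply_derivative(main, capital)
```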

See the Brain Brew Starter Project for an example of Csv Derivatives working.

brain-brew's People

Contributors

aplaice, dependabot[bot], ohare93


brain-brew's Issues

UnicodeDecodeError with source_to_anki

Hi, I'm using brain-brew to build anki-ultimate-geography for my translation. I've got Python 3.7 installed and I'm on Windows. Trying to run brain_brew recipes/source_to_anki.yaml, I always get the following error:

UnicodeDecodeError: 'charmap' codec can't decode byte 0x98 in position 71: character maps to <undefined>

And here's the rest of what the cmd says:

INFO:root:Builder file recipes/source_to_anki.yaml is ✔ good
INFO:root:Attempting to generate Guids
INFO:root:Generate guids complete
Traceback (most recent call last):
  File "c:\users\redacted\appdata\local\programs\python\python37\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\redacted\appdata\local\programs\python\python37\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\Scripts\brain_brew.exe\__main__.py", line 7, in <module>
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\main.py", line 25, in main
    recipe = TopLevelBuilder.parse_and_read(recipe_file_name, verify_only)
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\configuration\build_config\top_level_builder.py", line 58, in parse_and_read
    return cls.from_list(recipe_data)
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\configuration\build_config\recipe_builder.py", line 17, in from_list
    tasks = cls.read_tasks(data)
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\configuration\build_config\recipe_builder.py", line 65, in read_tasks
    task_or_tasks = [matching_task.from_repr(task_arguments)]
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\configuration\build_config\parts_builder.py", line 40, in from_repr
    return cls.from_list(data)
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\configuration\build_config\recipe_builder.py", line 17, in from_list
    tasks = cls.read_tasks(data)
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\configuration\build_config\recipe_builder.py", line 63, in read_tasks
    task_or_tasks = [matching_task.from_repr(t_arg) for t_arg in task_arguments]
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\configuration\build_config\recipe_builder.py", line 63, in <listcomp>
    task_or_tasks = [matching_task.from_repr(t_arg) for t_arg in task_arguments]
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\build_tasks\deck_parts\headers_from_yaml_part.py", line 50, in from_repr
    override=HeadersOverride.from_repr(rep.override)
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\build_tasks\overrides\headers_override.py", line 30, in from_repr
    deck_desc_html_file=HTMLFile.create_or_get(rep.deck_description_html_file)
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\representation\generic\source_file.py", line 35, in create_or_get
    file = cls.from_file_loc(location)
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\representation\generic\html_file.py", line 18, in from_file_loc
    return cls(file_loc)
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\representation\generic\html_file.py", line 14, in __init__
    self.read_file()
  File "C:\Users\redacted\.virtualenvs\anki-ultimate-geography-dldUsQk7\lib\site-packages\brain_brew\representation\generic\html_file.py", line 22, in read_file
    self._data = r.read()
  File "c:\users\redacted\appdata\local\programs\python\python37\lib\encodings\cp1250.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x98 in position 71: character maps to <undefined>

If it helps, the thing I'm trying to build is on my fork
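A likely explanation, assuming the repository's files are UTF-8: the traceback ends in cp1250.py because open() without an explicit encoding falls back to the Windows locale codec, and byte 0x98 is undefined in cp1250. A minimal reproduction of the mismatch:

```python
# Byte 0x98 appears in the UTF-8 encoding of U+2018 (a curly quote)
# but is undefined in Windows cp1250, the codec the traceback shows.
text = "\u2018Côte d’Ivoire\u2019"
data = text.encode("utf-8")

assert 0x98 in data                   # the exact byte from the error message
assert data.decode("utf-8") == text   # explicit UTF-8 round-trips cleanly

try:
    data.decode("cp1250")             # the Windows locale fallback
    decoded_ok = True
except UnicodeDecodeError:
    decoded_ok = False                # 'charmap' codec can't decode byte 0x98
```

Reading the files with an explicit encoding="utf-8" would sidestep the locale default.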

issues importing data from csv

Hi,
I am getting some errors running the src-to-anki recipe on new data. I have tried adding the data directly to the csv initialized under src/data, and also tried setting up a new recipe. My csv data is converted from tsv using a Python script. The data looks OK using vim with the csv plugin, but the original tsv file has embedded commas, extended Latin characters, and embedded double quotes. And since this is language/prose, there is no guarantee that there will not be a stray quote, so I suspect it may be a csv issue. It would be on my wishlist to have a tsv data source, which would at least eliminate these comma and quote problems. Otherwise, here is a sample error. I do note the message referring to tags, so I tried a file with and without any tags, with the same results. Any ideas on a solution appreciated.

[bkelly@toolbox Jula]$ brainbrew run recipes/headwords-5.yaml 
INFO:root:Builder file recipes/headwords-5.yaml is ✔ good
Traceback (most recent call last):
  File "/var/home/bkelly/.local/bin/brainbrew", line 33, in <module>
    sys.exit(load_entry_point('Brain-Brew==0.3.5', 'console_scripts', 'brainbrew')())
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/main.py", line 19, in main
    command.execute()
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/commands/run_recipe/run_recipe.py", line 15, in execute
    recipe = TopLevelBuilder.parse_and_read(self.recipe_file_name, self.verify_only)
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/commands/run_recipe/top_level_builder.py", line 60, in parse_and_read
    return cls.from_list(recipe_data)
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/commands/run_recipe/recipe_builder.py", line 20, in from_list
    tasks = cls.read_tasks(data)
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/commands/run_recipe/recipe_builder.py", line 68, in read_tasks
    task_or_tasks = [matching_task.from_repr(task_arguments)]
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/commands/run_recipe/parts_builder.py", line 40, in from_repr
    return cls.from_list(data)
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/commands/run_recipe/recipe_builder.py", line 20, in from_list
    tasks = cls.read_tasks(data)
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/commands/run_recipe/recipe_builder.py", line 73, in read_tasks
    inner_task.execute()
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/build_tasks/csvs/notes_from_csvs.py", line 74, in execute
    notes_part: List[Note] = [self.csv_row_to_note(row, self.note_model_mappings) for row in csv_rows]
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/build_tasks/csvs/notes_from_csvs.py", line 74, in <listcomp>
    notes_part: List[Note] = [self.csv_row_to_note(row, self.note_model_mappings) for row in csv_rows]
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/build_tasks/csvs/notes_from_csvs.py", line 87, in csv_row_to_note
    tags = split_tags(filtered_fields.pop("tags"))
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/utils.py", line 83, in split_tags
    split = [entry.strip() for entry in re.split(r';\s*|,\s*|\s+', tags_value)]
  File "/usr/lib64/python3.9/re.py", line 231, in split
    return _compile(pattern, flags).split(string, maxsplit)
TypeError: expected string or bytes-like object

Single sample row being imported:

6084b28d-1134-4ecf-bc77-ec8ff4e5ba36,fɔ, ,,dire; jouer (d'un instrument),,,,,,,,,,,, ,,,,,,,,,Musa k'a fɔ n ye.,Moussa me l'a dit.,Musa k'a fɔ n ye ko Fanta be na.,Musa m'a dit que Fanta viendra.,mɔgɔw tun be dundun fɔla.,les gens jouaient du tam-tam.,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, ,
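The TypeError at the bottom of the trace shows re.split receiving a non-string. With Python's csv.DictReader, a row shorter than the header fills the missing cells with None (the default restval) rather than an empty string, so a ragged row (easy to produce with stray commas or quotes) could hand the tag splitter a None. A guarded sketch of the splitting logic (the cause is an assumption, and this is not the project's confirmed fix):

```python
import re

# Same split pattern as in the traceback, with a guard for None/empty
# values such as a missing "tags" cell in a short csv row.
def split_tags(tags_value):
    if not tags_value:   # covers None (short row) and ""
        return []
    return [t for t in re.split(r";\s*|,\s*|\s+", tags_value.strip()) if t]

assert split_tags(None) == []
assert split_tags("UG::Europe, UG::Africa") == ["UG::Europe", "UG::Africa"]
```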


Composable, extendable and customisable decks

Hey there, thanks for the update! Looks like you're making good progress. 🚀

I've been thinking about Brain Brew and what it could bring to Ultimate Geography in the long term, so I thought I'd share some long-term ideas. (@aplaice, this might interest you as well.)


The problem

Over the years, users of UG have asked for a number of extra fields to be added to the deck -- most notably IPA and audio pronunciation, but also languages, currencies, flag descriptions (vexillology), links to Wikipedia, etc.

These requests have so far been rejected or put on hold for various reasons, ranging from lack of user interest to maintenance burden (single CSV, long-term support, etc.).

IMO, the key to maintaining a good-quality open source project in the long term is to avoid scope creep - i.e. doing a few things really well rather than doing every possible thing poorly. So I've always been careful about keeping the deck as simple and uncluttered as possible. I prefer to have a vast majority of really happy users rather than to try-but-kinda-fail to satisfy everyone.

Brain Brew will obviously solve the CSV limitation, but the other issues remain... If a field has strong user interest and people are willing to maintain it in the long term, it would make sense to include it in the deck's repository, but then including the field in one or more note templates can still be problematic as it impacts every user. And this still leaves fields that don't have strong user interest or guaranteed long-term support...

If I understand correctly, the Personal Fields feature you've been working on will at least let users add more content to their deck while still allowing them to re-export or re-import it. This is a great step forward, but eventually, we'll need a leap, and I think Brain Brew can make that leap.

What we need

What we need is a way to make decks fully composable, extendable and customisable.

Users need the ability to compose, extend and customise a core deck in whichever way they want without having to actually modify that deck.

Let me start by explaining what I mean by those terms.

Composable

This is about allowing users to create, modify and remove templates from a deck.

A basic example of composability is UG's extended deck (confusing... 😄), which combines fields in two additional templates.

But there are plenty more use cases for composability. Who knows, perhaps somebody would like to have a template with capitals on the front and flags on the back? 🤷 Or perhaps they're sick of seeing Flag similarities and want to remove the field from the flag templates. Or perhaps they'd like to turn the Country info field into a hint field?

Going even further, perhaps a user has extended the deck with another field (see next section) and they'd like to create a template with this field, or modify one of the templates to include the field as additional information?

Extendable

This is about adding new content - i.e. fields, for instance currencies or IPA. Pretty straightforward.

Customisable

This is about customising the styles of the templates (font, color, spacing, etc.) People may find some fonts easier to read, or they may want to add icons to help them understand the cards faster.


A note on translated decks

Translated decks are perfect examples of both extendability and composability: new fields that are combined into new templates. (Technically, the templates in UG are the same for all languages, but that's just because Anki DM doesn't support translating templates.)


How do we get there?

Extension repositories

The way I see it, the first thing we need is a way to store "extended" fields in separate repositories. Yes, perhaps even translated fields!

The advantage of having "extension" repositories separate from the main deck's repository is clear: each "extension" repository would be free to live its life, to have its own maintainers, its own opinions (e.g. stylistic choices, simplified vs traditional Chinese characters, etc.), and so on. If an extension repository were to no longer be maintained or to become too opinionated, people could fork it, create an alternative version of it, or simply stop using it. Either way, the core deck would remain unimpacted.

Deck configuration

Then we need a deck configuration schema that allows:

  • referencing a core deck's repository,
  • referencing any number of extension repositories,
  • adding/removing/replacing the core deck's templates (either with custom templates or with templates provided by an extension repository),
  • customising the core deck's styles.

Ideally, the configuration schema would be "Brain Brew-agnostic", in the sense that it would not contain configuration specific to Brain Brew. Some sort of high-level format that could be processed by any tool.

People would share their deck configurations in their own repos, in Gists, or whatever.

Brain Brew support

Obviously Brain Brew would need to support this deck configuration format. 😄

Tooling

Users would then need an easy way to generate decks from shared configurations.

Installing Python, setting up Brain Brew, copying a deck configuration file, running the build command, importing in Anki, etc. It's way too much for most users. We need tooling to help, perhaps a desktop app. That's long, long term... 😄


What do you think?

First, what do you think about the general idea/plan/whatever the hell this was...? 🤣 Am I making any sense? Do you think extension repositories are the way forward? It felt good to try to identify which issues extendability, composability and customisability could resolve for us deck maintainers...

Obviously, this whole thing makes no sense if Brain Brew can't support it. It's already built for composability, so I guess the complexity is more around extendability. Do you think referencing and pulling content from other repositories is doable? I mean, the idea would really be for Brain Brew to run potentially outside of the core deck, in a folder without any content but a configuration file. Is this even plausible?

UnicodeDecodeError in UG

I'm trying to run the source_to_anki.yml recipe in UG, but I'm running into this error:

UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 5270: character maps to <undefined>

Here a few lines from the stack trace:

  File "c:\users\[...]\lib\site-packages\brain_brew\representation\generic\csv_file.py", line 34, in read_file
    self.column_headers = list_of_str_to_lowercase(csv_reader.fieldnames)
  File "C:\Program Files\Python37\lib\csv.py", line 98, in fieldnames
    self._fieldnames = next(self.reader)
  File "C:\Program Files\Python37\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]

Does it ring a bell?

Feedback from UG set-up

Alright, so here is all of my feedback from your awesome move-to-brain-brew set-up for UG. Sorry if this is a bit raw and disorganised. 😅

  1. builder_files - I know the naming of this folder is not tied to Brain Brew, but I feel like scripts would be a better name. The way I see it, the YAML files in this folder are "scripts" that define "tasks".

  2. brain_brew_config.yaml - very minor, but what would you think of calling this file brain-brew.config.yaml?

    • I come from a web front-end background, and that's how config files are usually named (e.g. webpack.config.js, postcss.config.js, etc.)
    • The Python package is called brain-brew, so I think this should be consistent (also I'm more of a kebab-case than snake_case kind of guy 😄).
    • All that being said, I'm not familiar with Python naming conventions, so I could be way off.
  3. generate syntax - in the YAML builder files (scripts?), I find the generate syntax a bit confusing, as it encompasses both read and write operations (sometimes both in the same task, with save_to_file).

    I feel like having separate, independent read (i.e. parsing a file/folder) and write (i.e. writing to a file/folder) tasks would make things a lot clearer. Here is how I see it:

    • In a read operation, you describe:
      • the file/folder name of the resource you want to parse,
      • its type (i.e. CSV, CrowdAnki JSON, YAML deck part, etc.),
      • the "deck parts" you want to parse from it, to which you give unique names (following a naming convention of sorts).
    • In a write operation, you describe:
      • the destination file/folder of the resource you want to write,
      • its type,
      • the named deck parts declared in the script's read tasks and how they are to be used in the destination resource.

    This is a huge simplification of the problem, but do you see what I'm getting at?

  4. Deck parts is what you call the various pieces of a deck's metadata, right? Would you be able to write down a list somewhere of all the deck parts that you have defined/identified, just so we're clear?

  5. Perhaps it's your plan, but I feel like some of the deck parts could be split up even more so they could live in separate files like in AnkiDM -- I liked how templates and the deck's description lived in HTML files, the styles in a CSS file, etc.

Note Model Required Fields should be generated

Note Models have a req dictionary that states which fields are required to be filled in to generate each Card Template. See here for a full explanation: https://github.com/ankidroid/Anki-Android/wiki/Database-Structure

This is a very long and useless dictionary for the purposes of keeping a Note Model in source control, and we should be able to just regenerate it when writing to CrowdAnki. Find where and how it is generated in Anki itself, and replicate that logic.

Example Note Model Yaml:

name: LL Noun
id: 3cc64d88-e410-11e9-960e-d8cb8ac9abf0
css: ".card {\n font-family: arial;\n font-size: 20px;\n text-align: center;\n color:\
  \ black;\n background-color: white;\n}\n\n.card1,.card3 { background-color: #B60F2D;\
  \ }\n.card5 { background: linear-gradient(90deg, #999999 0%, #B60F2D 20%, #B60F2D\
  \ 80%, #999999 100%); }\n.card6 { background: linear-gradient(90deg, #999999 0%,\
  \ #2E9017 20%, #2E9017 80%, #999999 100%); }\n.card2,.card4 { background-color:\
  \ #2E9017; }\n.card7 { background: linear-gradient(90deg, #B60F2D 49.9%, #2E9017\
  \ 50.1%); }\n\n.nightMode.card1,.nightMode.card3 { background-color: #700; }\n.nightMode.card5\
  \ { background: linear-gradient(90deg, #999999 0%, #700 20%, #700 80%, #999999 100%);\
  \ }\n.nightMode.card6 { background: linear-gradient(90deg, #999999 0%, #250 20%,\
  \ #250 80%, #999999 100%); }\n.nightMode.card2,.nightMode.card4 { background-color:\
  \ #250; }\n.nightMode.card7 { background: linear-gradient(90deg, #700 49.9%, #250\
  \ 50.1%); }\n\n\n.word {\n font-size:1.5em;\n}\n\n.pronunciation{\n color:blue;\n\
  }\n\n.extrainfo{\n color:lightgrey;\n}"
fields:
- name: Word
  font_size: 12
- name: X Word
- name: Y Word
- name: Picture
  font_size: 6
- name: Extra
- name: X Pronunciation (Recording and/or IPA)
- name: Y Pronunciation (Recording and/or IPA)
- name: Plural
- name: Indefinite Plural
- name: Definite Plural
- name: MorphMan_FocusMorph
templates:
- name: X Comprehension
  question_format: "{{#X Word}}\n\t<span class=\"word\">{{text:X Word}}</span>\n{{/X\
    \ Word}}"
  answer_format: "{{#X Word}}\n\t<span class=\"word\">{{X Word}}</span>\n{{/X Word}}\n\
    \n<hr id=answer>\n\n{{Picture}}\n\n{{#X Pronunciation (Recording and/or IPA)}}\n\
    \t<br><span class=\"pronunciation\">{{X Pronunciation (Recording and/or IPA)}}</span>\n\
    {{/X Pronunciation (Recording and/or IPA)}}\n\n<br>\n{{#Extra}}\n\t<br><span class=\"\
    extrainfo\">{{Extra}}</span>\n{{/Extra}}"
- name: Y Comprehension
  question_format: "{{#Y Word}}\n\t<span class=\"word\">{{text:Y Word}}</span>\n{{/Y\
    \ Word}}"
  answer_format: "{{#Y Word}}\n\t <span class=\"word\">{{Y Word}}</span>\n{{/Y Word}}\n\
    \n<hr id=answer>\n\n{{Picture}}\n\n{{#Y Pronunciation (Recording and/or IPA)}}\n\
    \t<br><span class=\"pronunciation\">{{Y Pronunciation (Recording and/or IPA)}}</span>\n\
    {{/Y Pronunciation (Recording and/or IPA)}}\n\n<br>\n{{#Extra}}\n\t<br><span class=\"\
    extrainfo\">{{Extra}}</span>\n{{/Extra}}"
- name: X Production
  question_format: "{{#X Word}}{{#Picture}}\n\t{{Picture}}\n{{/Picture}}{{/X Word}}"
  answer_format: "{{FrontSide}}\n\n<hr id=answer>\n\n<span class=\"word\">{{X Word}}</span>\n\
    \n{{#X Pronunciation (Recording and/or IPA)}}\n\t<br><span class=\"pronunciation\"\
    >{{X Pronunciation (Recording and/or IPA)}}</span>\n{{/X Pronunciation (Recording\
    \ and/or IPA)}}\n\n<br>\n{{#Extra}}\n\t<br><span class=\"extrainfo\">{{Extra}}</span>\n\
    {{/Extra}}"
- name: Y Production
  question_format: "{{#Y Word}}{{#Picture}}\n\t{{Picture}}\n{{/Picture}}{{/Y Word}}"
  answer_format: "{{FrontSide}}\n\n<hr id=answer>\n\n <span class=\"word\">{{Y Word}}</span>\n\
    \n{{#Y Pronunciation (Recording and/or IPA)}}\n\t<br><span class=\"pronunciation\"\
    >{{Y Pronunciation (Recording and/or IPA)}}</span>\n{{/Y Pronunciation (Recording\
    \ and/or IPA)}}\n\n<br>\n{{#Extra}}\n\t<br><span class=\"extrainfo\">{{Extra}}</span>\n\
    {{/Extra}}"
- name: X Spelling
  question_format: "{{#X Word}}\n\t<br>Spell this word:<br>\n\n\t<span class=\"word\"\
    >{{type:X Word}}</span>\n\n\t<br>{{Picture}}\n{{/X Word}}"
  answer_format: "{{FrontSide}}\n\n{{#X Pronunciation (Recording and/or IPA)}}\n\t\
    <br><span class=\"pronunciation\">{{X Pronunciation (Recording and/or IPA)}}</span>\n\
    {{/X Pronunciation (Recording and/or IPA)}}\n\n<br>\n{{#Extra}}\n\t<br><span class=\"\
    extrainfo\">{{Extra}}</span>\n{{/Extra}}"
- name: Y Spelling
  question_format: "{{#Y Word}}\n\t<br>Spell this word:<br>\n\n\t<span class=\"word\"\
    >{{type:Y Word}}</span>\n\n\t<br>{{Picture}}\n{{/Y Word}}"
  answer_format: "{{FrontSide}}\n\n{{#Y Pronunciation (Recording and/or IPA)}}\n\t\
    <br><span class=\"pronunciation\">{{Y Pronunciation (Recording and/or IPA)}}</span>\n\
    {{/Y Pronunciation (Recording and/or IPA)}}\n\n<br>\n{{#Extra}}\n\t<br><span class=\"\
    extrainfo\">{{Extra}}</span>\n{{/Extra}}"
- name: X and Y Production
  question_format: "{{#X Word}}\n{{#Y Word}}\n\t{{Picture}}\n{{/Y Word}}\n{{/X Word}}\n"
  answer_format: "{{FrontSide}}\n\n<hr id=answer>\n\n<div class=\"word\">{{text:X\
    \ Word}}</div>\n<div class=\"word\">{{text:Y Word}}</div>\n\n{{#X Pronunciation\
    \ (Recording and/or IPA)}}\n\t<br><span class=\"pronunciation\">{{X Pronunciation\
    \ (Recording and/or IPA)}}</span>\n{{/X Pronunciation (Recording and/or IPA)}}\n\
    \n{{#Y Pronunciation (Recording and/or IPA)}}\n\t<br><span class=\"pronunciation\"\
    >{{Y Pronunciation (Recording and/or IPA)}}</span>\n{{/Y Pronunciation (Recording\
    \ and/or IPA)}}\n\n<br>\n{{#Extra}}\n\t<br><span class=\"extrainfo\">{{Extra}}</span>\n\
    {{/Extra}}"
tags:
- LL::Grammar::Noun
required_fields_per_template:
- - 0
  - any
  - - 1
- - 1
  - any
  - - 2
- - 2
  - all
  - - 1
    - 3
- - 3
  - all
  - - 2
    - 3
- - 4
  - all
  - - 1
    - 3
- - 5
  - all
  - - 2
    - 3
- - 6
  - all
  - - 1
    - 2
    - 3

Note Model Generation

Need to be able to generate Note Models with a script, so that replacements can be automated and many models can share the same body, but with different Card Templates / Fields / Text / etc.

The issue is that I also want to maintain the bidirectional sync with Anki that Brain Brew has now. Change it in a Yaml file, then sync those changes to Anki; change it in Anki and sync those changes back to the Yaml file.

The 3 types of files:

  • Generation files
  • Note Models in Yaml
  • Note Models in Anki

I have come to the conclusion that this is not possible in the way a user would want. The issue is that a user will edit an individual Card Template in either Anki or Yaml and want that change applied to the entire list, but this requires the changes in the Yaml file to be synced back into the Generation file. At the present moment I don't think that's feasible.

If this feature is made now, I imagine it will be a one-way process from Generation -> Yaml, while Yaml <-> Anki stays fine. There could then be a verification command, run after an Anki -> Yaml sync, that confirms the Generation file produces the same result as the Yaml file. Otherwise the user should update the Generation file manually and run the verification again after that.

Open to suggestions.
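The verification step described above could boil down to parsing both documents and comparing the resulting data. A minimal sketch, assuming PyYAML and that the Generation file has already been rendered to YAML text (yaml_equal is a hypothetical helper, not Brain Brew API):

```python
# Hypothetical sketch: compare two YAML documents by parsed value, so
# formatting differences (flow vs block style, quoting) don't matter.
import yaml  # PyYAML, which Brain Brew already depends on


def yaml_equal(a_text: str, b_text: str) -> bool:
    """True when both YAML documents parse to the same data structure."""
    return yaml.safe_load(a_text) == yaml.safe_load(b_text)
```

A check like this would pass even when the generated file and the hand-edited file differ only in layout, which is exactly what a "same result" verification wants.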

Conflict with asset manager plugin??

I wanted to take a look at this but was stopped when initialising a CrowdAnki repo with the error:

=== snip
 File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/representation/json/crowd_anki_export.py", line 60, in _read_json_file
    self.note_models = list(map(NoteModel.from_crowdanki, self.json_data.note_models))
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/representation/yaml/note_model.py", line 161, in from_crowdanki
    ca: cls.CrowdAnki = data if isinstance(data, cls.CrowdAnki) else cls.CrowdAnki.from_dict(data)
  File "/var/home/bkelly/.local/lib/python3.9/site-packages/brain_brew/configuration/representation_base.py", line 6, in from_dict
    return cls(**data)  # noqa
TypeError: __init__() got an unexpected keyword argument 'assetManager'

This is obviously the plugin:
https://github.com/hgiesel/anki_asset_manager

Since I have a bunch of card types, I think I can no longer go back from using Asset Manager. But maybe this is something that can be ignored on your end. Otherwise, I have been semi-automatically syncing my file to a Google spreadsheet via TSV export, awk scripts, and google-drive sync. Is there a way to get started with brain-brew and a TSV file?
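"Ignored on your end" could mean dropping unknown keys such as the plugin-injected assetManager before constructing the representation, instead of passing them straight to __init__. A sketch under that assumption (RepresentationBase and Example here are stand-ins, not the real Brain Brew classes):

```python
# Hypothetical tolerant from_dict: keep only keys that match declared
# dataclass fields, so extra keys like "assetManager" no longer raise
# TypeError: __init__() got an unexpected keyword argument.
from dataclasses import dataclass, fields


class RepresentationBase:
    @classmethod
    def from_dict(cls, data: dict):
        known = {f.name for f in fields(cls)}
        return cls(**{k: v for k, v in data.items() if k in known})


@dataclass
class Example(RepresentationBase):
    name: str
    css: str = ""


# The unknown "assetManager" key is silently discarded.
e = Example.from_dict({"name": "Basic", "assetManager": {"version": 1}})
```

The trade-off is that genuinely misspelled keys would also be silently dropped, so a warning log for discarded keys might be worth adding.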

Filenames that are too long are not found

E.g: "empty-room-interior-design-open-space-big-panoramic-window-balcony-sea-view-parquet-wooden-floor-modern-contemporary-141777978.jpg"

os.walk does not seem to find files with names of this length. Finding a proper solution would be good, but in the meantime I've just manually renamed the affected files.
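As a stopgap, over-long media filenames could at least be detected and reported up front. A minimal sketch, assuming the walk itself succeeds and using an arbitrary 120-character limit (not a value taken from Anki or Brain Brew):

```python
# Sketch: warn about media filenames whose base name exceeds a limit.
# MAX_NAME_LEN = 120 is an assumed threshold for illustration only.
import os

MAX_NAME_LEN = 120


def find_long_filenames(root: str):
    """Yield (folder, filename) pairs whose base name exceeds the limit."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if len(name) > MAX_NAME_LEN:
                yield dirpath, name
```

Reporting these with a clear message would let users rename (or let Anki's Check Media shorten) the files before a build fails mysteriously.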

Checking that the number of columns in every row is the same as (or less-than-or-equal to?) the number of headers (per file) (error-reporting)

The user should have CSV files which have all columns labeled (i.e. no row has more columns than there are headers in the given file). If this is the case, then the error described here won't occur. (i.e. I'm not describing a bug in BrainBrew — if the inputs are correct, then everything is fine — just sub-optimal error-reporting.)

Unfortunately, while this will usually be the case, it won't always be, as the user might add some text to the right of the rightmost column (as a temporary annotation) or forget to add the relevant column header. If it isn't the case, then a rather confusing message is displayed (e.g. see here):

$ pipenv run build
INFO:root:Builder file recipes/source_to_anki.yaml is ✔ good
INFO:root:Attempting to generate Guids
INFO:root:Generate guids complete
Traceback (most recent call last):
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/bin/brain_brew", line 8, in <module>
    sys.exit(main())
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/main.py", line 19, in main
    command.execute()
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/commands/run_recipe/run_recipe.py", line 15, in execute
    recipe = TopLevelBuilder.parse_and_read(self.recipe_file_name, self.verify_only)
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/commands/run_recipe/top_level_builder.py", line 60, in parse_and_read
    return cls.from_list(recipe_data)
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/commands/run_recipe/recipe_builder.py", line 20, in from_list
    tasks = cls.read_tasks(data)
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/commands/run_recipe/recipe_builder.py", line 68, in read_tasks
    task_or_tasks = [matching_task.from_repr(task_arguments)]
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/commands/run_recipe/parts_builder.py", line 40, in from_repr
    return cls.from_list(data)
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/commands/run_recipe/recipe_builder.py", line 20, in from_list
    tasks = cls.read_tasks(data)
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/commands/run_recipe/recipe_builder.py", line 68, in read_tasks
    task_or_tasks = [matching_task.from_repr(task_arguments)]
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/build_tasks/csvs/notes_from_csvs.py", line 56, in from_repr
    file_mappings=rep.get_file_mappings(),
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/build_tasks/csvs/shared_base_csvs.py", line 22, in get_file_mappings
    return list(map(FileMapping.from_repr, self.file_mappings))
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/transformers/file_mapping.py", line 76, in from_repr
    derivatives=list(map(cls.from_repr, rep.derivatives)) if rep.derivatives is not None else [],
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/transformers/file_mapping.py", line 68, in from_repr
    csv.read_file()
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/representation/generic/csv_file.py", line 47, in read_file
    self._data.append({key.lower(): row[key] for key in row})
  File "/home/munksgaard/.local/share/virtualenvs/ultimate-geography-wQmoXBY1/lib/python3.10/site-packages/brain_brew/representation/generic/csv_file.py", line 47, in <dictcomp>
    self._data.append({key.lower(): row[key] for key in row})
AttributeError: 'NoneType' object has no attribute 'lower'

It's possible to track down the source of the error with a small amount of debugging, but it's not very user-friendly.

If BrainBrew checked that there are no rows with too many columns (i.e. more columns than there are headers in the given file), then I believe that this class of issues could be caught with nicer warning messages.
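The confusing AttributeError has a neat explanation: csv.DictReader collects any cells beyond the header row under the None key (its restkey default), and that None is what later blows up in key.lower(). So the proposed check is essentially "is None in the row dict". A sketch, assuming this reading of csv_file.py's traceback (read_checked is a hypothetical helper):

```python
# Sketch of the proposed pre-check: detect rows with more cells than
# there are headers and raise a readable error, instead of letting the
# None restkey reach key.lower() and crash with an AttributeError.
import csv


def read_checked(path: str):
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        # Header is line 1, so data rows start at line 2.
        for line_num, row in enumerate(reader, start=2):
            if None in row:
                raise ValueError(
                    f"{path}, line {line_num}: more columns than headers; "
                    f"extra cells {row[None]!r}"
                )
            yield {key.lower(): value for key, value in row.items()}
```

Since the check is one dict-membership test per row, it should have no measurable performance cost.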

This is obviously not very urgent/important and I haven't really thought about whether there are better solutions/whether my proposed solution would not slow BB down etc. — I'm mainly writing this down here for reference! :)

html_file.py cannot read utf-8 file in chinese windows

Hi, I'm using brain-brew to build ultimate-geography for my translation.

brain_brew\representation\generic\html_file.py
If the HTML file contains non-English characters, it cannot be read.
However, I can read the HTML properly when it is encoded as GBK.
That isn't a universal fix, though: other contributors, whose OS uses English or a similar language, would then have to convert this file from GBK to their OS's local encoding (UTF-8?).
My test file is attached:
headers.zip

Error message with build ultimate-geography

INFO:root:Builder file recipes/source_to_anki.yaml is ✔ good
INFO:root:Attempting to generate Guids
INFO:root:Generate guids complete
Traceback (most recent call last):
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\Scripts\brain_brew-script.py",
>
    load_entry_point('Brain-Brew==0.3.2', 'console_scripts', 'brain_brew')()
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\main.py", line 25, in main
    recipe = TopLevelBuilder.parse_and_read(recipe_file_name, verify_only)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\configuration\build_config\top_level_builder.py", line 58, in parse_and_read
    return cls.from_list(recipe_data)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\configuration\build_config\recipe_builder.py", line 17, in from_list
    tasks = cls.read_tasks(data)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\configuration\build_config\recipe_builder.py", line 65, in read_tasks
    task_or_tasks = [matching_task.from_repr(task_arguments)]
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\configuration\build_config\parts_builder.py", line 40, in from_repr
    return cls.from_list(data)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\configuration\build_config\recipe_builder.py", line 17, in from_list
    tasks = cls.read_tasks(data)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\configuration\build_config\recipe_builder.py", line 63, in read_tasks
    task_or_tasks = [matching_task.from_repr(t_arg) for t_arg in task_arguments]
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\configuration\build_config\recipe_builder.py", line 63, in <listcomp>
    task_or_tasks = [matching_task.from_repr(t_arg) for t_arg in task_arguments]
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\build_tasks\deck_parts\headers_from_yaml_part.py", line 50, in from_repr
    override=HeadersOverride.from_repr(rep.override)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\build_tasks\overrides\headers_override.py", line 30, in from_repr
    deck_desc_html_file=HTMLFile.create_or_get(rep.deck_description_html_file)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\representation\generic\source_file.py", line 35, in create_or_get
    file = cls.from_file_loc(location)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\representation\generic\html_file.py", line 18, in from_file_loc
    return cls(file_loc)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\representation\generic\html_file.py", line 14, in __init__
    self.read_file()
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python36\lib\site-packages\brain_brew-0
_brew\representation\generic\html_file.py", line 22, in read_file
    self._data = r.read()
UnicodeDecodeError: 'gbk' codec can't decode byte 0xa1 in position 157: illegal multibyte sequence

I found that csv_file.py already supports UTF-8.
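The traceback points at open() being called without an encoding, so Python falls back to the platform default (GBK on a Chinese Windows install). The fix the reporter is hinting at would be to pass encoding="utf-8" explicitly, mirroring what csv_file.py already does. A minimal sketch (read_html is a stand-in for the read in html_file.py, not the real function name):

```python
# Sketch: read an HTML file with an explicit encoding instead of the
# platform default, which is GBK on Chinese Windows and caused the
# UnicodeDecodeError above.
def read_html(path: str) -> str:
    with open(path, "r", encoding="utf-8") as r:
        return r.read()
```

Making UTF-8 explicit keeps builds reproducible across contributors' locales.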

CrowdAnki Subdecks are not Supported Fully

CrowdAnki exports can contain a subdeck key at the top level, which recursively contains all the decks below that deck, and so on.

Brain Brew does not do anything with this. No idea how difficult this would be to implement.

Media Filenames and Cleanup

Anki shortens filenames that are too long in the latest versions, but only when a user runs Check Media. If they have already created a repo using Brain Brew then their old media will still be there. Seems to me there are a few issues here:

Warn users about long filenames

Make sure people are not caught out by this by detecting when a media filename is too long. Then users know what to do after that, and can delete the redundant files manually themselves if they wish to.

Find non-referenced media

There should be a keyword that can be run in a builder file where Brain Brew can check Deck Part Notes for their media references and then compare that to a specified folder. Deleting those that are not found, or perhaps moving them somewhere else for user confirmation?
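The keyword described above would need to extract media references from note fields and diff them against the folder's contents. A rough sketch under the assumption that references take the two common Anki forms, <img src="..."> and [sound:...] (this is an illustration, not Brain Brew code):

```python
# Sketch: collect media references from note fields and report files in
# a media folder that no note mentions, as candidates for deletion or
# for moving aside pending user confirmation.
import os
import re

# Matches src="file" in HTML tags and Anki's [sound:file] syntax.
MEDIA_REF = re.compile(r'src="([^"]+)"|\[sound:([^\]]+)\]')


def unreferenced_media(note_fields, media_folder):
    referenced = set()
    for field in note_fields:
        for m in MEDIA_REF.finditer(field):
            referenced.add(m.group(1) or m.group(2))
    return sorted(
        name for name in os.listdir(media_folder) if name not in referenced
    )
```

Moving the orphans to a quarantine folder rather than deleting outright seems safer, since the regex above would miss less common reference styles.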

Rename files

Long filenames are an issue on their own, but tbh some filenames are just bad. Users should have the option to rename referenced media files found in a Note after the field / column the media appears in, adding a number suffix for each file after the first. One issue there: what if the media is referenced in multiple notes? Just use the first one found?

Windows EOL in HTML not supported

I am currently trying out a similar integration to Ultimate Geography with one of my decks. However, I am working on Windows, and the HTML exports from CrowdAnki have CRLF line endings. Thus, I had to patch my Brain Brew locally, namely change the regex

html_separator_regex = r'[\n]{1,}[-]{1,}[\n]{1,}'
to accept [\r\n] instead of [\n]. This also brings it in sync with utils.filename_from_full_path and utils.folder_name_from_full_path.
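For reference, the CRLF-tolerant pattern could look like this; the variable name html_separator_regex comes from the issue, while the exact rewritten pattern is my assumption of what "accept [\r\n] instead of [\n]" means:

```python
# Sketch: a separator regex that tolerates both LF and CRLF line
# endings around the "---" divider in CrowdAnki HTML exports.
import re

html_separator_regex = r'[\r\n]{1,}[-]{1,}[\r\n]{1,}'


def split_html(text: str):
    return re.split(html_separator_regex, text)
```

Because [\r\n]{1,} matches any mix of \r and \n, the same pattern handles files written on Windows, Linux, and macOS.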

I haven't found contribution guidelines to this repo, I could prepare a PR from a fork, if it would be wanted.

Sync to yaml file

Hi there,
Is it (or will it be) possible to sync between CrowdAnki and a YAML source file? Or to create a recipe between a TSV/CSV file and a YAML file? If so, is there any basic example to check? Thanks for the still-awesome tool!

AttributeError: 'CrowdAnkiExport' object has no attribute 'note_models'

When cloning the starter project, deleting build/, and trying to build it, I get

Traceback (most recent call last):
  File "/home/langston/.local/share/virtualenvs/anki-nyc-w_4Oah3X/bin/brainbrew", line 8, in <module>
    sys.exit(main())
  File "/home/langston/.local/share/virtualenvs/anki-nyc-w_4Oah3X/lib/python3.9/site-packages/brain_brew/main.py", line 19, in main
    command.execute()
  File "/home/langston/.local/share/virtualenvs/anki-nyc-w_4Oah3X/lib/python3.9/site-packages/brain_brew/commands/run_recipe/run_recipe.py", line 15, in execute
    recipe = TopLevelBuilder.parse_and_read(self.recipe_file_name, self.verify_only)
  File "/home/langston/.local/share/virtualenvs/anki-nyc-w_4Oah3X/lib/python3.9/site-packages/brain_brew/commands/run_recipe/top_level_builder.py", line 60, in parse_and_read
    return cls.from_list(recipe_data)
  File "/home/langston/.local/share/virtualenvs/anki-nyc-w_4Oah3X/lib/python3.9/site-packages/brain_brew/commands/run_recipe/recipe_builder.py", line 20, in from_list
    tasks = cls.read_tasks(data)
  File "/home/langston/.local/share/virtualenvs/anki-nyc-w_4Oah3X/lib/python3.9/site-packages/brain_brew/commands/run_recipe/recipe_builder.py", line 68, in read_tasks
    task_or_tasks = [matching_task.from_repr(task_arguments)]
  File "/home/langston/.local/share/virtualenvs/anki-nyc-w_4Oah3X/lib/python3.9/site-packages/brain_brew/commands/run_recipe/parts_builder.py", line 40, in from_repr
    return cls.from_list(data)
  File "/home/langston/.local/share/virtualenvs/anki-nyc-w_4Oah3X/lib/python3.9/site-packages/brain_brew/commands/run_recipe/recipe_builder.py", line 20, in from_list
    tasks = cls.read_tasks(data)
  File "/home/langston/.local/share/virtualenvs/anki-nyc-w_4Oah3X/lib/python3.9/site-packages/brain_brew/commands/run_recipe/recipe_builder.py", line 73, in read_tasks
    inner_task.execute()
  File "/home/langston/.local/share/virtualenvs/anki-nyc-w_4Oah3X/lib/python3.9/site-packages/brain_brew/build_tasks/crowd_anki/notes_from_crowd_anki.py", line 62, in execute
    ca_models = self.ca_export.note_models
AttributeError: 'CrowdAnkiExport' object has no attribute 'note_models'

System info:

  • Python 3.9.6
  • NixOS 21.05
  • pipenv, version 2020.11.15
