dbosk / examgen

An exam generator for the LaTeX exam document class

License: MIT License
Sometimes two questions with equal tag sets are added. The questions are (always?) different.
Add a template for a question so that everything needed is covered, similar to that of examtool.
Make the library more modular, so that e.g. the Question class can be replaced by the one from examtool or vice versa. This is one step towards a symbiosis of examgen and examtool. (This is being worked on in the refactor branch.)
The algorithm should be more formally analysed and its properties clarified. Right now it is very "intuitive".
When editing a document in Vim, it would be nice to generate the questions by running the command and have the result automatically inserted into the questions environment. However, previous attempts (#2) have failed.
Currently we output using the exam class's \question format. However, sometimes it would be nice to output the question in a \begin{question}...\end{question} environment instead.
Currently the code is written with the program as the interface. It should be rewritten as a library with a program acting as an intermediary between the user and the library.
If we generate half of an exam, it would be nice to continue later. E.g. write a few questions manually and then generate the rest.
The UI is an example use of the library. Maybe if we split it out (#41), we get a concrete example; now that the UI is inline, it is not a particularly concrete example of the library (API). This should align well with #31. Put the UI in its own section or something.
As questions without \label have no tags, the regex returns None, and the program cannot handle that case.
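A minimal sketch of the fix, assuming a hypothetical helper and a colon-separated tag format inside \label (the actual label format in examgen may differ): treat a missing \label as an empty tag set instead of letting the None match propagate.

```python
import re

# Hypothetical tag format: colon-separated tags inside \label{q:...};
# the exact label format used by examgen is an assumption here.
LABEL_RE = re.compile(r"\\label\{q:([^}]*)\}")

def extract_tags(question_text):
    """Return the tags from a question's \\label, or [] if there is none.

    Checking for a None match is the point: a question without \\label
    simply has no tags instead of crashing the program."""
    match = LABEL_RE.search(question_text)
    if match is None:
        return []
    return [tag for tag in match.group(1).split(":") if tag]
```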
This is done in the code, but the output is still not written to the file. Is this due to piping? Do we need an -o option for the output file?
The nada-ten exam template uses its own constructs: the questions are delimited by \begin{problem}...\end{problem}.
examgen can be used for educational purposes beyond generating exams. One use is to filter out questions concerning a single intended learning outcome (ILO) as part of formative assessment, i.e. to find the questions that the student must continue to work on.
Since tags in the question databases are colon separated, they can contain spaces. How does this work with the command-line interface? Can we have -t "an ILO" "a second ILO"?
The detex(1) utility used to print the question in the terminal removes all math-mode code, which is bad. Instead of detex(1), we could use another TeX pretty-printer that can handle math mode. One possibility is HeVeA combined with lynx(1) or links(1); another is tex2mail(1) for the math-mode code followed by detex(1) for the rest.
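As a rough sketch of what a math-preserving pretty-printer could look like (the function name and the exact rewriting rules are assumptions, not examgen code): protect inline math before stripping commands, then restore it.

```python
import re

def plain_text(tex):
    """Render a question for the terminal while keeping math-mode code.

    Sketch of an alternative to detex(1): inline math ($...$) is
    protected and restored verbatim, one-argument commands are unwrapped
    to their argument, and remaining bare commands are dropped."""
    stash = []
    def protect(match):
        stash.append(match.group(0))
        return "\x00{}\x00".format(len(stash) - 1)
    tex = re.sub(r"\$[^$]*\$", protect, tex)                         # protect inline math
    tex = re.sub(r"\\[a-zA-Z]+\*?(?:\[[^\]]*\])?\{([^}]*)\}", r"\1", tex)  # unwrap \cmd{...}
    tex = re.sub(r"\\[a-zA-Z]+\*?", "", tex)                         # drop bare commands
    return re.sub(r"\x00(\d+)\x00", lambda m: stash[int(m.group(1))], tex)
```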
We want to handle inter-question dependence. Some questions might be follow-ups to previous questions. This should be detectable by tracing the questions’ labels. If we take this into consideration we do not risk including the follow-up question without the first one.
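A possible shape for the dependency closure (the function name and the \ref-based convention are assumptions): starting from the chosen questions, repeatedly pull in any question whose label is \ref'd.

```python
import re

def with_dependencies(chosen, by_label):
    """Close a selection of questions under their \\ref dependencies.

    `by_label` maps a question label to its LaTeX text; a question that
    \\ref's another question's label drags that question in too, so a
    follow-up is never included without its parent. Sketch only:
    assumes follow-ups reference their parent with \\ref{q:...}."""
    result = set(chosen)
    stack = list(chosen)
    while stack:
        label = stack.pop()
        for dep in re.findall(r"\\ref\{(q:[^}]+)\}", by_label[label]):
            if dep in by_label and dep not in result:
                result.add(dep)
                stack.append(dep)
    return result
```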
Since the user tags the questions selected in interactive mode, it would be good to save this work. So these tags from interactive mode should be put into the code. This way the questions are correctly tagged when the exam is later used as a questions database.
Mention that Lennart came up with the idea of connecting intended learning outcomes to each question, then generating the exam based on the intended learning outcomes.
Clarify the descriptions, e.g. what is "manual mode", "interactive mode", "prettifying"?
See this for an example: https://pymotw.com/3/readline/
That might help to fix what's in the readline branch.
Currently things like \uplevel{\section{Other questions}} are taken as part of the preceding question. These should be handled so that they are excluded from the generated exam.
An exercise environment might have an optional argument. This argument will most likely not work with the \question command, so it should be removed; a regex is probably required.
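One possible regex-based rewrite, as a hedged sketch (the function is hypothetical and assumes the optional argument contains no nested brackets):

```python
import re

def exercise_to_question(tex):
    """Rewrite an exercise environment into the exam class's \\question,
    dropping the optional argument that \\question cannot take.

    Sketch only: assumes the optional [..] argument contains no
    nested closing bracket."""
    tex = re.sub(r"\\begin\{exercise\}(?:\[[^\]]*\])?", r"\\question", tex)
    return re.sub(r"\\end\{exercise\}", "", tex)
```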
In some cases there is a valid use of a minimum and a maximum number of questions, e.g. when creating exercise sheets for a particular intended learning outcome (ILO).
Good command-line options are probably -m and -M, for min and max respectively.
We can also read frame environments out of the slides. Then we can use course material as inspiration rather than only the old questions, which might counteract converging on very particular questions. Frames might be very specialized, though, so maybe read out a subsection instead.
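A sketch of the frame extraction (regex-based and deliberately naive: it ignores frame options and titles, which a real version would have to handle, perhaps by grouping frames per subsection as suggested above):

```python
import re

# Matches beamer frame bodies; DOTALL lets the body span multiple lines.
FRAME_RE = re.compile(r"\\begin\{frame\}(.*?)\\end\{frame\}", re.DOTALL)

def frames(slides_tex):
    """Pull the body of every frame environment out of slide sources.

    Sketch only: frame options like [fragile] and frame titles are
    left inside the body rather than parsed out."""
    return [body.strip() for body in FRAME_RE.findall(slides_tex)]
```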
Currently there is no error handling in the code. This should of course be added to improve usability. (The error messages can be quite hard to understand at the moment.)
Write an abstract for the documentation capturing an overview of the features.
Some questions use "exotic" packages; it would be nice to extract them and put them in the preamble automatically. Lennart's exam generator uses the following format:

\usepackage{enumitem}
\usepackage{listings}
\question[3]\label{q:typomvandling}
Based on the type conversions that occur in the program below.

But this restricts the format to one question per .tex file, and once the question is incorporated in an exam, this information is lost --- so the exam cannot be used as an input database to examgen.
However, questions formatted like this can be used with examgen; simply add the following filter to the Makefile:

.PHONY: filter-usepackage
filter-usepackage: questions.tex
	grep "usepackage" questions.tex | sort | uniq >> preamble.tex
	sed -i "/^\\\\usepackage/d" questions.tex

exam.pdf: filter-usepackage

One must still manually resolve option conflicts, though.
If interactive mode prints all output needed for interaction to stderr instead, then examgen can be used in interactive mode from Vim too, e.g.:

:.! examgen -d *.tex -t topic1 topic2 ... topicN -i

to have examgen's output inserted at the current line in the buffer --- a line which is probably between \begin{questions} and \end{questions}.
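A sketch of the stderr convention in Python (the ask helper is hypothetical, not examgen's API): prompts go to stderr, so only the generated LaTeX ever reaches stdout and hence the Vim buffer.

```python
import sys

def ask(prompt, stream=None):
    """Prompt on stderr and read the answer from the given stream (stdin).

    Keeping stdout clean for the generated questions means that
    `:.! examgen ... -i` in Vim inserts only the questions into the
    buffer, never the interactive prompts."""
    print(prompt, end=" ", file=sys.stderr, flush=True)
    return (stream or sys.stdin).readline().strip()
```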
examgen -d infosak-questions.tex -t E C A foundations msb infotheory crypto auth usability lvlltrl trustcomp passwd
When the above is run, one question with a subset of matching tags is matched. After that no question is matched, although there are many questions which should match.
Read back the exam, add statistics to each question and write it back to disk. We can also do the same for the exam as a whole.
This way we should be able to keep statistics similarly to examtool.
We should add a separate tool that handles the statistics: e.g. examgen for generating the exam and examstats to handle the statistics and follow-up. Then maybe rename the repo to examtools.
Instead of requiring that a question is a subset of the required tags, we want to do the opposite. A use case for this is to find all questions which cover a specific intended learning outcome (ILO). E.g. if a student has passed most ILOs except one, then we can filter out questions and exercises covering exactly that ILO.
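The inverted matching is essentially a one-line set test; sketched here with hypothetical names:

```python
def covers_ilo(question_tags, required):
    """Keep a question iff its tags cover ALL required tags.

    This is a superset test, inverting the current subset behaviour:
    with required=["crypto"], it finds every question touching that
    ILO, which is what the formative-assessment use case needs."""
    return set(required) <= set(question_tags)
```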
Occasionally one feels the inspiration flowing and one writes a full exam manually. To aid this work it would be nice to have a validation option which simply processes the questions of an exam and checks whether all ILOs are covered.
Instead of handling only exams as question databases, handle \begin{exercise}...\end{exercise} environments from textbook, lecture-note or even slide sources, e.g. LNCS, amsbook etc. This includes any accompanying solution environment too.
If we write the chosen questions to file directly, then we will not lose them if we kill examgen or similar.
When running interactive mode, the editor (at least Vim) cannot detect any file type. Thus it would be nice to insert a modeline, e.g.:
% vim: filetype=tex
We can let examgen(1) generate the code for the entire document, i.e. preamble, title, intro etc. We can use certain command-line arguments for date, title etc. The authors of the exam should be the authors of the included questions, i.e. examgen(1) has to generate the authors list. The alternative is to use the examiner as the author (more of an editor, really) and credit contributors otherwise.
Another requirement when automatically generating the preamble is to find out which packages are actually needed. The required packages can be fetched from their respective question databases, although this approach might include some unnecessary packages.
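A sketch of the package harvesting (hypothetical function; real questions may load the same package with conflicting options, which this does not resolve):

```python
import re

def required_packages(question_texts):
    """Collect the \\usepackage lines needed by the selected questions.

    Duplicates are removed while preserving order; option conflicts
    (the same package loaded with different options) still need
    manual resolution."""
    seen = []
    for text in question_texts:
        for line in re.findall(r"^\\usepackage(?:\[[^\]]*\])?\{[^}]*\}",
                               text, re.MULTILINE):
            if line not in seen:
                seen.append(line)
    return seen
```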
When one has discarded all questions but not yet covered all required tags, one must be able to reload the database to start from the beginning --- or at least continue without any more questions.
Some questions are slight modifications of older questions; a question and its parent should not both be included. We need a similarity metric to estimate whether two questions are too closely related, so that such questions can be discarded automatically.
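One cheap similarity metric is difflib's SequenceMatcher ratio; both the metric and the 0.8 threshold below are assumptions for illustration, not project decisions:

```python
from difflib import SequenceMatcher

def too_similar(a, b, threshold=0.8):
    """Crude check for a question and its slightly modified descendant.

    SequenceMatcher.ratio() is only one possible metric and 0.8 an
    arbitrary cut-off; a real version might normalise whitespace and
    strip LaTeX markup before comparing."""
    return SequenceMatcher(None, a, b).ratio() >= threshold
```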
It might not be sustainable to keep the tags in the \label of a question. One alternative approach is to have specially crafted comments, e.g.
\question\label{q:SomeLabel}
% examgen: tag1;tag2;...;tagN
% examgen: more;tags;will;be;appended
A difficult question.
Edit: Instead of "examgen:" it is better to have something describing what it actually is, so "tags:" is better.
\question\label{q:SomeLabel}
% tags: tag1;tag2;...;tagN
% tags: more;tags;will;be;appended
A difficult question.
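Parsing the proposed comment format is straightforward; a sketch (function name assumed, semicolon-separated tags as in the example above):

```python
import re

# One or more "% tags: ..." comment lines per question, appended in order.
TAGS_RE = re.compile(r"^%\s*tags:\s*(.*)$", re.MULTILINE)

def comment_tags(question_text):
    """Collect tags from `% tags:` comments in a question's source.

    Tags are semicolon-separated; later comment lines append to the
    tag list, matching the proposed format."""
    tags = []
    for line in TAGS_RE.findall(question_text):
        tags.extend(t.strip() for t in line.split(";") if t.strip())
    return tags
```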
Sometimes we want to get all questions on a particular topic. This relates to #27. Add an --all or -A option for this.
questions <ILO1> ... <ILOn>: searches the question databases for questions matching the given ILOs. This would essentially be

DBs=$(find . -name \*.tex)
examgen -C -d $DBs -t $*

grade <exam-ID>: this allows entering results from an exam to store statistics. The <exam-ID> could be the filename. Then stats can be stored with each question in the actual exam.

stats <question-ID>: prints the statistics for the question in question. The <question-ID> could be the hash of the question, similar to hashes of commits in Git. Stats for a particular question can be extracted by summarizing over every available exam.
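A content hash gives a Git-like identifier that is stable across files; SHA-1, the whitespace normalisation, and the 12-character prefix below are all assumptions for illustration:

```python
import hashlib

def question_id(question_text):
    """Content-addressed question ID, analogous to a Git commit hash.

    Whitespace is normalised first so trivial reflowing of the LaTeX
    source does not change the ID."""
    normalised = " ".join(question_text.split())
    return hashlib.sha1(normalised.encode("utf-8")).hexdigest()[:12]
```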
Add the full structure from OpenSecEd as an example: Makefile with tags and question databases, question databases found with the learning material.
The example exam is also a bad example: instead of topics and difficulty, it should be based on intended learning outcomes more directly --- to encourage better generation of exams.
Add an option (-q?) which students can use to do quizzes, i.e. show the question first, then print the answer.
Since invocation of examgen from make is kind of central, an example Makefile should be included --- e.g. the one from README.md.
Currently the code examgen ... > questions.tex in a Makefile yields an error message in the file questions.tex (if an error occurs).
In interactive mode, in addition to just letting the user edit the tags, let the user edit the question text as well (in their favourite editor).
The dasak-*.tex and infosakb-*.tex files are old and should be updated.