ls1intum / themis
Themis - Artemis Tutor App for iPad
Home Page: https://ls1intum.github.io/Themis/
License: MIT License
The context menu interferes with the highlights when the user keeps their finger on the submission content for too long before dragging. We should either solve this or disable the context menu.
Also, for programming exercises, we should remove the 'Feedback' option from the context menu. It's not available for other exercise types and can cause invalid selections and confusion.
This issue was found during the user study.
We have identified a bug in our app where the diff that highlights changes made by students to template code is not displayed in read-only mode. This issue is affecting the ability of instructors to review and provide feedback on student work.
Currently, when a student makes changes to the template code, a diff is generated to highlight the changes made. This is an important feature for instructors, as it helps them to understand the changes made by students and to provide targeted feedback.
However, we have found that the diff is not displayed in read-only mode, which makes it difficult for instructors to see the changes students made and hinders their ability to review student work.
We have identified a performance issue in our platform that affects the user experience when resizing the CodeView with larger files. When resizing the CodeView, which is caused by adapting the sidebars, there is a noticeable lag in the redrawing of the line gutter and source code.
This issue is affecting the user experience for users who need to grade larger files, and it can slow down the grading and review process. We have received feedback that this lag is frustrating and hinders the ability to work efficiently.
To improve the performance of the CodeView, we propose implementing a series of optimizations that will reduce the lag when resizing the view. These optimizations could include implementing more efficient rendering algorithms, optimizing memory usage, and improving the performance of the backend logic that handles the redrawing of the line gutter and source code.
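One possible direction, assuming the lag comes from the gutter and source code being redrawn on every individual resize event, is to coalesce redraw requests so the expensive redraw runs at most once per display frame. The sketch below is illustrative only; the type and method names are assumptions, not the actual Themis API.

```swift
// Sketch: coalesce rapid resize events into at most one redraw per frame.
// All names here are illustrative, not the actual Themis/CodeView API.
final class RedrawCoalescer {
    private var redrawPending = false
    private(set) var redrawCount = 0

    // Called for every resize event; only records that a redraw is needed.
    func requestRedraw() {
        redrawPending = true
    }

    // Called once per display frame (e.g. driven by a CADisplayLink).
    func frameTick() {
        guard redrawPending else { return }
        redrawPending = false
        redrawCount += 1 // the actual gutter/source redraw would happen here
    }
}
```

With this pattern, a burst of resize events during a sidebar drag triggers a single redraw on the next frame instead of one redraw per event.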
The following issues were found during the user study:
Desirable:
Please create individual GitHub issues for those listed above and link them here.
The cancel button on the assessment view shows the 'Delete' option and cancels the assessment when tapped. This should not be possible.
This bug was found during the user study.
One of the user study participants suggested making the correction sidebar more expandable.
Current state:
If the user has an Apple Pencil, the inline feedback mode seems to get toggled automatically. This leads to the following issues:
The toolbar button doesn't work as expected: the user needs to toggle it on and off several times to manually enable/disable the feedback mode.
Inline feedback can be added to files belonging to the solution repository.
This bug was found during the user study. I could not reproduce it myself since I don't have access to a real device with Apple Pencil.
The following issues were found during the user study:
Critical:
Moderate:
Desirable:
Please create individual GitHub issues for those listed above and link them here.
Currently, when a user starts grading a new exercise, they need to switch to the web client first to accept the grading instructions. This extra step can be time-consuming and interrupt the grading process, leading to delays in completing assessments.
To improve the user experience and streamline the grading process, we propose adding a feature that allows users to accept grading instructions directly within the app. This could be achieved via an alert that appears within the app when the user is ready to start grading.
The alert would display the grading instructions along with a button to accept them. Once the user accepts, they will be able to continue with the grading process without having to switch to the web client.
This feature will save users time and make the grading process more efficient, which will be especially useful for users who grade projects frequently.
Several study participants had trouble highlighting paragraphs due to this problem. The issue is less noticeable on the simulator.
See #185 (review)
Currently, when a user selects a predefined correction guideline feedback template with a blank "Feedback" field, the default text shown in the feedback box is not very helpful. It reads "Add feedback for students here (visible for students)", which is too generic and provides no guidance for either the tutor or the student.
As a result, we would like to improve the placeholder text for correction guideline feedback templates. Our goal is to provide students with more specific feedback on how to improve their submission following the correction guidelines, even when there is no text provided by the instructor.
To achieve this goal, we propose the following solution:
Modify the placeholder text based on the selected correction guideline feedback template.
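A minimal sketch of the proposed behavior is shown below. `GradingInstruction`, its properties, and the placeholder wording are assumptions for illustration, not the actual Themis model.

```swift
// Sketch: derive a criterion-specific placeholder when the instructor
// left the "Feedback" field blank. Names and wording are assumptions.
struct GradingInstruction {
    let criterionTitle: String
    let feedback: String // empty when the instructor left the field blank
}

func placeholderText(for instruction: GradingInstruction) -> String {
    guard instruction.feedback.isEmpty else { return instruction.feedback }
    return "Explain how the submission could better fulfill the criterion "
        + "'\(instruction.criterionTitle)' (visible for students)"
}
```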
When the user selects a range containing only empty space, an inline feedback with no visible highlight is created. Instead, it should either highlight the whole line or not open the 'Add Feedback' sheet at all.
This bug was found during the user study.
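One way to implement the "do not open the sheet at all" option is to validate the selection before showing it, as in this sketch (the function name and the nil-means-no-sheet convention are assumptions; expanding to the whole line instead would be a separate design choice):

```swift
// Sketch: treat a selection that contains only whitespace as invalid.
// Returning nil would mean the 'Add Feedback' sheet is not opened.
func validatedSelection(_ selection: Substring) -> Substring? {
    if selection.isEmpty || selection.allSatisfy(\.isWhitespace) {
        return nil
    }
    return selection
}
```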
Split view would be useful for scenarios where the user wants to see an earlier assessment while assessing a new submission. Enabling it is quite straightforward: remove the tick from the 'Requires full screen' checkbox in the project target settings (General tab):
Nevertheless, Themis is made for the iPad, so it does not work well in small screen sizes. Splitting the screen leads to a result similar to running the app on an iPhone. That's why the app needs to be adjusted for small screen sizes first.
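For reference, that checkbox maps to the standard `UIRequiresFullScreen` key in the app target's Info.plist; unchecking it is equivalent to:

```xml
<!-- Info.plist: allow Split View / Slide Over on iPad -->
<key>UIRequiresFullScreen</key>
<false/>
```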
Requirements:
Steps to reproduce:
Switching between repos somehow detaches the file tree from the correction sidebar, so it no longer knows which files are currently open.
This bug was found during the user study.
One of the user study participants mentioned that it would be nice to have a short tutorial video to train tutors to use Themis. We were already planning to include such videos in the documentation, so this issue will be closed with a documentation PR.
Currently, the programming exercise assessment view can only apply syntax highlighting for Java and Swift exercises. It should be extended by adding other languages to the list of supported languages in CodeEditor.
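The extension could be as simple as mapping file extensions to highlighter identifiers, as in this hypothetical sketch (the identifiers, the set of languages, and the function shape are assumptions; the actual CodeEditor API may differ):

```swift
// Hypothetical sketch: map a file extension to a highlighting language
// identifier; nil means fall back to plain text. Assumed names only.
func highlightLanguage(forFileExtension ext: String) -> String? {
    switch ext.lowercased() {
    case "java": return "java"
    case "swift": return "swift"
    case "py": return "python"
    case "c", "h": return "c"
    case "cpp", "cc", "hpp": return "cpp"
    case "kt": return "kotlin"
    default: return nil
    }
}
```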
Adding a suggested feedback to a programming submission leads to a crash when undone.
Steps
Currently, we use the `testCase.testName`, `text`, and `detailText` properties of automatic feedbacks to show them on the correction sidebar. However, we do not have any logic for handling automatic feedback related to SCA or submission policy issues. Such feedback entries contain prefixes in their `text` field, defined in Feedback.java:
We should check for these prefixes and handle them.
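A sketch of the proposed prefix check follows. The identifier strings are assumed to mirror the constants defined in Artemis' Feedback.java; verify them against the server version in use before relying on them.

```swift
// Sketch: classify automatic feedback by the prefix of its `text` field.
// The prefix strings are assumptions based on Artemis' Feedback.java.
enum AutomaticFeedbackKind {
    case testCase
    case staticCodeAnalysis
    case submissionPolicy
}

func kind(ofAutomaticFeedbackText text: String?) -> AutomaticFeedbackKind {
    guard let text = text else { return .testCase }
    if text.hasPrefix("SCAFeedbackIdentifier:") {
        return .staticCodeAnalysis
    }
    if text.hasPrefix("SubmissionPolicyFeedbackIdentifier:") {
        return .submissionPolicy
    }
    return .testCase
}
```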
This is how the automatic feedbacks with test cases currently look:
This issue was reported by @Strohgelaender
The following issues were found during the user study:
Critical:
Desirable:
Please create individual GitHub issues for those listed above and link them here.
Current implementation:
When switching between the different programming exercise repositories, a loading animation could be shown as well.
Suggested by @maximiliansoelch when reviewing #173
The following issues were found during the user study:
Moderate:
Desirable:
Please create individual GitHub issues for those listed above and link them here.
This issue was found by @maximiliansoelch when reviewing #187
When the repository is changed, if the file that is currently being viewed exists in the new repo, it should not be closed. Instead, its content should be updated.
Current state:
This suggestion was given during the user study.
On the skeleton loading view, the last table has a divider line, which looks a bit odd.
Reported by @maximiliansoelch when reviewing #173
The web client allows selecting a line by tapping on a + icon next to the line number. However, in Themis, the user needs to drag their finger from the start of the line until the end of the line to achieve this.
CourseView shows some exercises in the 'Currently in Assessment' section even though their submission deadline has not passed yet. When the user tries to assess such an exercise, the app shows an unknown error.
Some users expect the feedback mode to be enabled by default and try highlighting elements directly after starting an assessment. They suggested making it enabled by default. However, I think this should only be done for non-programming exercises since the users are likely to open files and scroll to read some code before adding inline feedback.
The exercise deadlines on the course overview are not updated on pull to refresh:
When the user pulls to refresh the ExerciseView, the submissions are not updated. This creates problems when, for example, a submission gets assessed over Artemis and the user wants to see the latest state on Themis without going back and forth between CourseView and ExerciseView.
Currently, the highlight feature is not triggered when the user simply taps on text. To select a word, the user needs to put their finger on a word and slightly drag to trigger it.
This issue was found during the user study.
Currently, our user interface displays both static code analysis feedback and test case feedback in the same location. While this design may have worked well in the past, it has become difficult for users to distinguish between the two types of feedback.
As a result, we would like to improve our user interface by visually separating the static code analysis feedback from the test case feedback. This will make it easier for users to quickly identify which feedback corresponds to which type of analysis.
To achieve this goal, we propose the following change to our user interface:
Add a new section to the user interface specifically for static code analysis feedback. This section will be visually distinct from the test case feedback section, making it clear to users which feedback corresponds to static code analysis.
This change will make it easier for users to interpret and act on the feedback provided by our application, leading to a more efficient and effective grading process.
The following issues were found during the user study:
Select university button is too close to the home button in landscape mode on small iPads.
Select button in the university selection sheet does not state whether it selects a university or the custom instance. Change it to 'Select Custom Instance'.
Test servers should be included in the university selection sheet by default in the developer mode (we also need to create this mode first)
Adjust input validation based on the selected university
Please create individual GitHub issues for those listed above and link them here.
Currently, the user can't see which submission they are assessing. This is especially inconvenient when there are multiple open submissions on the ExerciseView and the user selects one to assess. A label containing the submission ID on the assessment view could help ensure that the user did not accidentally tap a wrong submission.
Currently, when giving inline feedback, it can be difficult to precisely select the intended range for feedback. Users are only able to check if the correct range is selected by sliding down the feedback sheet and peeking at the highlighted code. This process is time-consuming and can be frustrating for users who want to provide precise feedback.
As a result, we would like to add an option to edit the selected range for inline feedback. This will allow users to make more precise selections and provide more specific feedback.
To achieve this goal, we propose the following solutions:
Add a split view option that displays both the code and the feedback sheet. This will allow users to see the selected range in context and make adjustments as necessary.
Provide users with the ability to edit the selected range directly in the split view. This could be done by allowing users to drag range markers to adjust the selection.
Update the feedback sheet to display the edited range in real-time. This will allow users to see the changes they have made and ensure that the selected range is accurate.
Overall, these changes will make it easier for users to provide precise feedback by allowing them to edit the selected range in a split view. This will improve the overall user experience and lead to more effective feedback.
The center button implemented in #170 does not center the diagrams as expected. This is most likely because the UML diagram size returned from the server is larger than the actual area that the diagram elements cover.
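If that hypothesis is correct, one fix is to center on the union of the element frames rather than the server-reported size. A minimal sketch, assuming element frames are available as `CGRect`s:

```swift
import Foundation

// Sketch: compute the bounding box actually covered by the diagram
// elements, instead of trusting the server-reported diagram size.
func elementBoundingBox(_ frames: [CGRect]) -> CGRect {
    frames.reduce(CGRect.null) { $0.union($1) }
}
```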
As can be seen below, the code editor is not reset, and the old highlight is still there after starting a new assessment.
This bug was found during the user study.