
apa's Introduction

apa

Accessible Platform Architectures WG deliverables

Important Note

Many of the deliverables formerly maintained here have now been moved to individual repositories, as detailed in the table below. Future GitHub issues or pull requests should be contributed to the new repositories. Such contributions are welcome.

Document Title New Location
Framework for Accessible Specification of Technologies w3c/fast
Inaccessibility of CAPTCHA w3c/captcha-accessibility
Accessibility of Remote Meetings w3c/remote-meetings
Synchronization Accessibility User Requirements w3c/saur
Natural Language Interface Accessibility User Requirements w3c/naur
RTC Accessibility User Requirements w3c/raur
XR Accessibility User Requirements w3c/xaur
W3C Accessibility Maturity Model w3c/maturity-model
Media Accessibility User Requirements w3c/media-accessibility-reqs
Payment Accessibility User Requirements w3c/payment-accessibility-reqs

apa's People

Contributors

bruce-usab, cptvitamin, dontcallmedom, halindrome, janinasajka, jasonjgw, joshueoconnor, michaelficarra, plehegar, realjoshue108, ruoxiran, samuelweiler, sehollier, steve-noble


apa's Issues

CAPTCHA alternative

www.kasada.io has a product which performs in-browser checking without any user interruption. It was explained as being a little like bitcoin mining in the browser, with the algorithm increasing in complexity after failed attempts, causing the attacker's costs to increase. Their "request a demo" page uses the technology to protect against fraudulent signups.

It doesn't flag screen reader behavior as suspicious, but it does require that the user visit from a browser, and it isn't free.
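The mechanism described above is, in outline, a proof-of-work challenge. This is not Kasada's actual implementation, just a minimal TypeScript sketch of the general idea, assuming a SHA-256 hash-prefix puzzle whose difficulty can be raised after failed attempts:

```typescript
import { createHash } from "crypto";

// The server issues a random challenge string; the client must find a nonce
// such that SHA-256(challenge + nonce) begins with `difficulty` zero hex
// digits. Raising the difficulty after failures increases an attacker's cost
// without interrupting a legitimate user.
function solveChallenge(challenge: string, difficulty: number): number {
  const target = "0".repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    const digest = createHash("sha256").update(challenge + nonce).digest("hex");
    if (digest.startsWith(target)) return nonce;
  }
}

// Verification is cheap for the server: a single hash.
function verifySolution(challenge: string, nonce: number, difficulty: number): boolean {
  const digest = createHash("sha256").update(challenge + nonce).digest("hex");
  return digest.startsWith("0".repeat(difficulty));
}
```

Because the work happens in script rather than in a user-facing puzzle, this kind of check imposes no perceptual task on the user, which is what makes it interesting as a CAPTCHA alternative.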

4.16 Interaction speed

REQ 15c
Having an avatar is a good idea, and I think it's a need that is starting to arise in different contexts (see REQ 10a). Maybe an idea is to have THE SAME avatar help in different situations and guide the user through the customisation settings of the VR/AR platform/app.

The avatar itself should be customisable, to make the user comfortable and at ease.

Add requirement for scalable text/UI?

User need: Magnification context and resetting

Can we add an extra requirement to allow for scalable font sizes in both UI and content? This could negate the need for screen magnification. This (and many other adaptations, even simultaneously) has been done successfully in research, and APIs like Dynamic Type on iOS and large text on Android are getting close.
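One way to picture the requirement: instead of magnifying rendered pixels, both UI chrome and content resolve their sizes from a single user-controlled scale. The scale names and factors below are purely illustrative, not taken from any platform API:

```typescript
// Illustrative text-scale preference; the names and values are examples,
// not platform specifications.
type TextScale = "default" | "large" | "extraLarge";

const SCALE_FACTORS: Record<TextScale, number> = {
  default: 1.0,
  large: 1.3,
  extraLarge: 1.6,
};

// UI and content compute font sizes from one base value, so everything
// reflows together rather than requiring pixel magnification.
function scaledFontSizePx(basePx: number, scale: TextScale): number {
  return Math.round(basePx * SCALE_FACTORS[scale]);
}
```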

This needs to have the XAUR label, but I don't have permissions to make that change on this repo.

Original feedback email, for reference

Typo

On XAUR, at paragraph 3, first point of the list:

'Over emphasis on motion controls. The are many...'

Should be:

'Over emphasis on motion controls. There are many...'

Media Synchronisation references

Filed on behalf of John Paton RNIB (original email to RQTF list https://lists.w3.org/Archives/Public/public-rqtf/2020Nov/0016.html)

A couple of BBC Whitepapers may be of interest. The Impact of Subtitle Display Rate on Enjoyment Under Normal Television Viewing Conditionshttps://www.bbc.co.uk/rd/publications/whitepaper306 and Live subtitles re-timing proof of concepthttps://www.bbc.co.uk/rd/publications/whitepaper318. The second one is more for interest and the tl;dr version is that the video encoding for HD video takes about 5 seconds so by tweaking the workflow they can regain that time and reduce the lag between video and subtitles. I don't know if that is in scope for the requirements doc.

4.4 Immersive personalisation

REQ 3a can be extended by proposing some sort of 'categorisation' support for the user, in combination with Symbols. Symbols can be related to categories, so the user can create semantic sets of objects. Having categories helps users remember Symbol semantics. Users should also be able to create custom categories in order to have their own categorisation system, beyond the default ones.

REQ 3b: users may benefit from an option to enable/disable autoplay of content like video/audio/animations.
They may also benefit from explicit play/pause controls for any animation/audio/video happening on screen (or, for audio, content that is playing without being visually presented).

Typo

The word "viscious" is actually spelled "vicious".
G :)

REQ 3a: Symbol sets too specific?

I'm wondering if this is really representative of the need and solution. When I read "symbol sets" I think Unicode. When I think of AAC, I think of pictures, and they're not the same.

Can we be less specific?

4.13 Immersive time limits

On REQ 12a, users may benefit from an option to enable/disable an always-visible clock, e.g. at the top or bottom of the screen, so they don't lose track of time.

They could also benefit from an option to display the time elapsed since the start of the current VR/AR session. This way they can keep track of time spent in VR/AR without setting an alarm for a pre-defined time limit, as proposed in REQ 12, which may not suit everybody. A limit and an alarm may cause anxiety in some users.

They should be able to choose between always keeping an eye on elapsed time, or setting a time limit (optionally with an alarm).

[charter] update to most recent template

Charter draft is not using the most up-to-date template, as evidenced by the security and privacy line. I think the IPR policy text TODOs would also be fixed by that update.

Ability to control object information announcement

There should be some way to choose a set of meta information, such as the vertical position and the size of an object, which may be crucial sometimes but not every time, by using an input command or a gesture.

4.9 and 4.10 similarities for notifications and updates

4.9 currently focuses on notification methods for magnification users, but uses general "AT" in the requirement. 4.10 then specifically calls out announcing UI updates for screen reader users.

I wonder if these could be combined to provide guidance for announcements and updates geared toward people with visual disabilities as a whole, with a focus on audio and/or haptic feedback. That feedback would presumably be provided regardless of the way the user is interacting (gestural or not).

Visual and audio guidelines

There should be some easy way to navigate the environment for people with different abilities, for example visible glowing lines that guide users to their destinations; these guidelines could also be made audible based on their spatial position.

Public Working Draft Suggestions (Web)

Note: I can move this into my other issue as well, but they're not related

Overall:

  • Suggest removing Flash/Silverlight success criteria due to low usage and EOL on Flash by Year-End 2020
  • Clarify usage of roles as many are unnecessary especially with current HTML, and it creates confusion with devs who don't yet know the difference
  • Add pure HTML examples to Authoring practices and give them more prominence to illustrate the above

Captions (Live) 1.2.4: Add a clear distinction between how captions differ from subtitles, plus language guidelines. We've had a lot of questions as to what is required in this area, and I've had to pull from the Language of Parts guidelines and others to come up with something. Add a success criterion stating that captions must match the voice-over language of the video. This would extend to the 1.2.7 per-language criteria.

Headings & Labels 2.4.6: Expand on heading usage and impact on page flow. (eg. Should be used to introduce content, not decorate text, and should not be the child of another element)

Focus Visible 2.4.7: There's a lot of cross-over with 2.4.11, so I would suggest condensing this into a flat guideline and adding success criteria based on how to meet moving this into Level A Compliance. The language can be cleaned up to say that the border should be at least 2 CSS pixels (I would propose 3) around the element, and would suggest possibly moving to the 4.5:1 contrast ratio for greater clarity. Based on conversations with colleagues though, this is a contentious one since the CSS created to meet this criteria needs to be closely monitored. In my opinion, this should be left on the browsers to come up with something more robust.

Target Size 2.5.5: Move down to AA Compliance. This should help to alleviate concerns on responsive views and at least partly compensate for pixel dense screens especially on mobile. Also suggest specifying a minimum width and having aspect ratio of the image be considered since most targets are not square.

[NEW] Viewport Baseline 4.1.4: State here that the default viewport should match a baseline CSS pixel width and height of 1920 x 1080 (1080p) subject to change. I would argue this could go up to 2560 x 1440 (1440p) and work as a more future-proof baseline especially considering new-generation phones. Would relate to 1.4.10 Reflow and make things clearer.

Copy note/list clarification

In section 3.2 it says "There will be many modality aspects for the developer and/or content author to consider:" - it looks like there's going to be a list of things there, but instead there's a "NOTE:". (The section ends by saying what's coming up in the input section, so I'm not sure if the colon relates to "the following sections".)

This needs to have the XAUR label, but I don't have permissions to make that change on this repo.

Original feedback email, for reference

Description of Google recaptcha v3 fallback

About Google reCAPTCHA v3, the article states:

However, as the failure fallback remains the presentation of a traditional CAPTCHA, it remains imperative to do better by users who require alternative CAPTCHA options as also enumerated here.

After reading Google's documentation about reCAPTCHA v3, I think it is not entirely correct to say 'the failure fallback remains the presentation of a traditional CAPTCHA'.
I understand that reCAPTCHA v3 only sends back a score. It is up to the developer to decide what action to implement if the score is under a certain threshold. So my impression is that this solution gives the developer using this CAPTCHA the freedom and the responsibility to choose an accessible fallback.
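A hedged sketch of the developer-side decision being described; the threshold and fallback names below are illustrative, not part of Google's API:

```typescript
// reCAPTCHA v3 returns only a score between 0.0 (likely a bot) and 1.0
// (likely a human); the site decides what happens below its chosen threshold.
// Picking an accessible fallback (e.g. out-of-band email verification)
// instead of a traditional visual CAPTCHA is entirely in the developer's hands.
type Action = "allow" | "email-verification";

function handleScore(score: number, threshold = 0.5): Action {
  if (score >= threshold) return "allow";
  return "email-verification"; // accessible fallback, not a visual puzzle
}
```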

AbleGamers reference

I realise it may just be a character limit in the footnote IDs, but is it possible to call the "able-game" footnote ID (used in section 3) "able-gamers" as that's their full name?

This needs to have the XAUR label, but I don't have permissions to make that change on this repo.

Original feedback email, for reference

Section 2.2: User need 2 – It isn’t clear what the need is here.

[OP Jason White against branch https://github.com/w3c/apa/tree/RTCUserNeeds]

Braille devices used in conjunction with a host computer are almost always controlled by whatever screen reader is running on that system. The screen reader typically provides a command to turn speech output on and off, so the user can turn speech off and use the screen reader entirely via the braille display (as I typically do in some circumstances such as teleconferences). Thus, I’m wondering what is being asked for here that can’t already be achieved by turning speech off and reading/navigating the application with the braille display. Notifications should be shown as alert messages on the braille device (typically replacing whatever the user is reading and persisting for a user-configurable amount of time). Is this requirement really distinct from the earlier requirement for notifications to be delivered to AT?

Knowledge domain accessibility

Abstract

Support for the presentation of, and interaction with knowledge domain content on the web is uneven at best. We note renewed activity in the World Wide Web Consortium (W3C) to ameliorate this situation in certain knowledge domains and inquire after generalized approaches which might be codified to enable knowledge domain experts to provide more robust support in their area of expertise. We focus specifically on providing accessibility support across knowledge domains that can enable and empower persons with disabilities in their education and professional relationships.

Problem Description

Accessibility to text-based web content and to interactive desktop-type widgets, such as checkboxes, menus, and sliders, has been made quite robust and reliable for persons living with sensory and motor disabilities. We are making progress extending support for persons with cognitive and learning disabilities.

Recent standardization work has begun providing accessibility support for graphically expressed knowledge through SVG.

Many intellectual disciplines, however, routinely express knowledge and facilitate discourse using knowledge-domain-specific symbology. Mathematics is expressed, and interacted with, via symbols and semantic constructs radically different from those used in music scoring, both of which differ markedly from linear textual presentation, even where semantic textual structures have also been made accessible.

Examples of knowledge domain symbologies include, but are not limited to:

  • Mathematics
  • Physics
  • Chemistry
  • Linguistics & Philology
  • Economics
  • Music Scoring

Additionally, there are common practices, even in textual content, that are not well supported for accessibility for users who rely on, or significantly benefit from, accurate synthetic text-to-speech (TTS) pronunciation of content. Examples here include:

  • The study of history, where publications routinely employ foreign-language words, phrases, and entire paragraphs inline.
  • The acquisition of foreign language skills
  • Interlinear glosses such as in the study of sacred and ancient texts where each ancient word is often "hyperlinked" to standard dictionary resources.

NOTE: The term "hyperlink" is set off in quotations because the common practice predates hypertext technology, and is commonly rendered in print through defined symbols for "cross referencing."

Toward Solutions

  • We cannot expect assistive technology vendors to solve these problems because the problems are largely unique knowledge domain by knowledge domain, and good solutions will require expertise with that knowledge domain's symbology and functional semantics.
  • Improved presentation must also lead to support for robust interactivity. It must be possible for persons with disabilities to interact with semantic elements of knowledge domain symbology, e.g. illustrate a mathematical proof step by step.
  • We need to look for common traits across multiple knowledge domains, e.g. We likely need to denote the use of a specific knowledge domain symbology across some span of content embedded within standard web page constructs.
  • We need to be on the lookout for particular widget types used by certain knowledge domains for which we lack accessibility support.
  • We need to be on the lookout for weaknesses in available markup to sufficiently express and interact with specific knowledge domain content.
  • We may need normative specifications for declaring in metadata what symbologies are utilized in a given individual publication. It should not be necessary to parse an entire publication to discover which symbology systems have been employed.
  • We will likely want best practices authoring guidance. It is highly likely that individual publications will include spans of content from distinct multiple knowledge domains.
  • We will likely require defined mechanisms for conveying correct terminology to accessibility APIs, as well as defined mechanisms for ensuring content is correctly pronounced by TTS.
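On the metadata point above: a purely illustrative sketch of what a declared-symbologies record might look like, so that tools can discover which symbology systems a publication uses without parsing the whole document. None of these identifiers come from a normative vocabulary:

```typescript
// Hypothetical shape for publication-level metadata declaring the
// knowledge-domain symbologies used in the document.
interface PublicationMetadata {
  title: string;
  symbologies: string[]; // illustrative identifiers, not a standard vocabulary
}

const example: PublicationMetadata = {
  title: "Introductory Harmonic Analysis",
  symbologies: ["mathml", "music-notation"],
};

// A reading tool or AT can check declared symbologies up front, before
// deciding how to render or announce the content.
function usesSymbology(meta: PublicationMetadata, id: string): boolean {
  return meta.symbologies.includes(id);
}
```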

4.7 focuses on high contrast solutions

I wonder if 4.7 could be expanded to include color customizations that aren't just in high contrast. Many users could benefit from color customization but may prefer to not rely on a high contrast solution specifically.

4.4, 4.11, and 4.19 similarities, and standardizing personalization and text transformation

I'm curious about how 4.4 (personalization with a focus on symbols), 4.11 (signing of text descriptions), and 4.19 (customization of text for captions or subtitles) are split out. I think these are all getting at different flavors of ways text-based content might be presented in an immersive environment, but each focuses on one scenario.

I think it would be valuable to consider one requirement for text-based content personalization (language, symbols, signing, etc.), and one for text transformation (font size, font, etc.) for any scenario that has text or a text equivalent. For example, a user who wants text descriptions to be presented in sign might also want captions presented the same way.

[RNIB] 4.16 Interaction speed

Filed on behalf of RNIB: Original comments sent to APA list

Regarding User Need 15: Some level of user-set-automation to pre-empt common interactions may help here. This could be set in a menu or a simple ML process (ie “The last 3 times [X trigger occurred] you [initiated Y interaction], would you like me to do this every time?”)

character based section I18N considerations [I18N]

2.1.1 Traditional Character-Based CAPTCHA
https://w3c.github.io/apa/captcha/#traditional-character-based-captcha

While some sites have begun providing CAPTCHAs utilizing languages other than English, an assumption that all web users can understand and reproduce English predominates. Clearly, this is not the case. Arabic or Thai speakers, for example, should not be assumed to possess a proficiency with the ISO 8859-1 character set [iso-8859-1], let alone have a keyboard that can easily produce those characters in the CAPTCHA's form field. Research has demonstrated how CAPTCHAs based on written English impose a significant barrier to many on the web; see Effects of Text Rotation, String Length, and Letter Format on Text-based CAPTCHA Robustness [captcha-robustness].

The above text has several potential issues:

  1. ISO 8859-1 ("Latin-1") is possibly not the best reference here, since what is probably meant is ASCII letters and digits. The difference between Latin-1 and ASCII is the various accented letters, which are not widely used in CAPTCHAs.
  2. Virtually all computing systems have a means of inputting ASCII, so saying that users might not have a "keyboard that can easily produce those characters" is probably false.
  3. The reverse is not true. CAPTCHA images containing non-ASCII text may prove difficult to use if the user does not have the appropriate keyboard available. It is difficult to determine on the server side what input capabilities a given user agent includes.
  4. Many characters or writing systems are difficult to discern when distorted. This includes accented Latin-script letters, cursive scripts such as Arabic, and of course Han ideographs.
  5. It has been observed that using actual words for CAPTCHAs improves accuracy, but of course this depends on the user being fluent in the language in question.

*This comment is part of the I18N horizontal review.*

Audio and Video Quality

[OP Jason White against branch https://github.com/w3c/apa/tree/RTCUserNeeds]

Section 3.1: I think this can be clarified on the basis of research (as discussed at last week’s meeting) and converted into an audio quality requirement.

Section 3.2: this overlaps with 3.1 in that both refer to audio requirements. I would suggest separating out the audio quality and video quality requirements and treating them as distinct issues, without overlap. I also think the video requirements can be specified as user needs/requirements as in section 2. Given the existence of EN 301 549 guidance for both audio and video, it isn’t clear why these aren’t set out as requirements in the format of section 2. Having relevant policy in place would seem to me to be a rather strong reason for specifying them as requirements – more so than for some of the possibilities already proposed in section 2, for which we don’t have much, if any, supporting research and experience – which is why we’re calling for public review, after all.

Intersection Observer review

The WebPlat WG has just published the FPWD of Intersection Observer, triggering wide review. Thanks to its time in incubation, the spec already has good interop, so we're hopeful of moving to CR reasonably quickly.

We don't think this specification affects the UI, but would welcome a second opinion from the APA WG if possible. Feedback no later than Friday 27th October would be appreciated, feedback earlier even more so.

Please file issues on the Intersection Observer repo.

If this request is better made through another channel, please let me know. Thank you.

default quality setting for audio and video

There is a default setting for audio/video quality for export.

For High quality it is 192 kbps for audio and 15 Mbps for video;
for Medium quality it is 128 kbps for audio and 5 Mbps for video;
for Low quality it is 96 kbps for audio and 384 kbps for video.

These settings can also be changed before exporting the video each time.
My query is: is there any way to modify these settings so that you do not have to change them every time before exporting the video?

Add requirement for shape as well as color be used?

User need: colour changes

Can we add an extra requirement that says that the content is designed so that shape is used instead of just colour?

I suggest that because if the requirement is met, and the user has only minor difficulties with contrast, that could be sufficient for them not to need to engage any specific display mode, and thus have a more immersive experience. There are several examples from the world of games that we could reference (that list actually misses one that I think is particularly good, but I'd have to ask Ian Hamilton about it, as I believe he gave it as an example but I can't remember its name).

Note: I understand this is similar to an existing WCAG requirement; not sure if the overlap is good, or we are trying to avoid it, but it is another—and a nicely immersive/unobtrusive—way to meet the user need.

This needs to have the XAUR label, but I don't have permissions to make that change on this repo.

Original feedback email, for reference

Get the object information with respect to the position of observer

Help users explore the environment using input gestures; objects should have meta information associated with them, including the distance from the person and coordinates for the direction relative to the observer.
For example: "Jasmine flowers in a vase, 2 meters at your 11 o'clock."
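The announcement in that example can be derived mechanically from the object's position relative to the observer. A minimal sketch, assuming a 2-D plane with the observer at the origin facing along +y:

```typescript
// Convert an object's relative position into a "<distance> at your
// <N> o'clock" phrase. The bearing is measured clockwise from straight
// ahead; each clock hour spans 30 degrees, with 0° mapping to 12 o'clock.
function describePosition(x: number, y: number, label: string): string {
  const distance = Math.hypot(x, y);
  const bearing = ((Math.atan2(x, y) * 180) / Math.PI + 360) % 360;
  let hour = Math.round(bearing / 30) % 12;
  if (hour === 0) hour = 12;
  return `${label}, ${distance.toFixed(0)} meters at your ${hour} o'clock`;
}
```

A real implementation would work in 3-D and fold in the vertical position mentioned in the earlier issue, but the clock-face mapping stays the same.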

Marking objects as "important"

As there may be so many things in the virtual environment, there should be some easy way to figure out the important information, objects or interactions. If there could be some way to mark any object as important, that could be recognized by ATs like screen readers or any AT that could highlight or make that object glow so that it is easy for low-vision users to locate.

Editorial suggestion for RAUR pre-WD => Note

New comments - thanks @scotthollier

Abstract: great, no issue.

Introduction:

I’d like to see the ‘What is RTC?’ section reworked. Here’s how it stands:

What is Real-time communication (RTC)?

The traditional data exchange model is client to server. Real-time communication (RTC) is game-changing as it is enabled in part by specifications like WebRTC that provides real-time peer to peer audio, video and data exchange directly between supported user agents. This enables instantaneous applications for video and audio calls, text chat, file exchange, screen sharing and gaming, all without the need for browser plugins. However, WebRTC is not the sole specification with responsibility to enable accessible real-time communications, as use cases and requirements are broad - as outlined in the IETF RFC 7478 'Web Real-Time Communication Use Cases and Requirements' document. [ietf-rtc]


If the reader wants to know what RTC is, it currently requires a bit of digging through the text. I’d like to see the first sentence as ‘RTC is…’, then a clear definition in response to the heading ‘What is RTC?’, and then go on to explain the relevance of WebRTC, game-changer, etc.
RTC accessibility: great. This is a good example of how the definition is clearer.
User needs definition: similar to first comment. Currently reads as:

---

2 User needs definition

This document outlines various accessibility related user needs for Accessible RTC. These user needs should drive accessibility requirements for Accessible RTC and its related architecture.

User needs are presented here with their related requirements; some in a range of scenarios (which can be thought of as similar to user stories). User needs and requirements are being actively reviewing by RQTF/APA.

Again, given the section is called ‘User needs definition’, it’d be good to start with ‘A user need is…’ and explain it, then go on to the rest of the text. At the moment it talks about user needs but in my opinion doesn’t define them.
