The jobsite app itself is fairly accessible, though it does have some issues with VoiceOver navigation, so in testing it's not exactly at its best.
To make an app fully accessible, it's important to run a series of accessibility tests before the app is finally compiled and released, especially if it's being distributed through the Apple App Store. Apple's developer guidelines cover accessibility development and testing for submission, although sadly they're not mandatory.
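As a rough illustration of what that testing can look like on the Apple side, here is a minimal sketch of a UI test that runs the built-in accessibility audit Apple added in Xcode 15 / iOS 17. The test class name and the idea of auditing the first screen are my assumptions, not the app's actual test suite.

```swift
import XCTest

// A minimal UI-test sketch (Xcode 15+ / iOS 17+) running Apple's built-in
// accessibility audit against whatever screen the app launches into.
// The class name and target screen are assumptions for illustration.
final class AccessibilityAuditTests: XCTestCase {
    func testMainScreenPassesAccessibilityAudit() throws {
        let app = XCUIApplication()
        app.launch()

        // Flags missing labels, poor contrast, tiny hit targets, and so on.
        try app.performAccessibilityAudit()
    }
}
```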
When hardware and software are made accessible in tandem as assistive technology, challenges arise, especially if a developer without accessibility experience is approached by a customer or product owner like myself, where assistive technology needs are critical. The challenge isn't just demonstrating the issues; the bigger challenge is that the developer needs to learn certain methods of accessibility integration and development: getting a feel for what resources are available, what can be added, where the problems lie based on how the app is coded, what GUI elements, components and window classes are used, and how objects and functions are tagged and described so that a screen reader like Apple's VoiceOver can speak out content.
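To make that "tagging and describing" concrete, here is a minimal UIKit sketch of the sort of thing a developer has to learn: giving a control a label, a hint and a trait so VoiceOver can describe it. The measure button and its wording are hypothetical, not taken from the real app.

```swift
import UIKit

// A minimal sketch of tagging a control so VoiceOver can speak it.
// "measureButton" and its wording are assumptions for illustration.
final class MeasureViewController: UIViewController {
    private let measureButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        measureButton.setImage(UIImage(systemName: "ruler"), for: .normal)

        // Without these, VoiceOver may announce little more than "button".
        measureButton.isAccessibilityElement = true
        measureButton.accessibilityLabel = "Take measurement"
        measureButton.accessibilityHint = "Triggers the T1 to capture a new distance"
        measureButton.accessibilityTraits = .button

        view.addSubview(measureButton)
    }
}
```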
To make the T1 a product which can speak natively with a smart device, whether Android or iOS based, the app needs to be easy to navigate by touch using the screen reader. When you touch areas of the screen, each element must be labelled and configured so it's identified as a text element, button, edit field, image and so on, and in doing so it's essential that there's a logical flow to how the screen reader navigates the GUI of the app.
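On iOS, that logical flow can be set explicitly rather than left to the visual layout. The sketch below assumes a hypothetical results screen and simply tells VoiceOver the order in which to step through its elements.

```swift
import UIKit

// A sketch of controlling VoiceOver's reading order so the screen reads
// logically: title, then the value, then the action. The view names and
// content are assumptions for illustration.
final class ResultsViewController: UIViewController {
    private let titleLabel = UILabel()
    private let valueLabel = UILabel()
    private let sendButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        titleLabel.text = "Last measurement"
        valueLabel.text = "2.345 m"
        sendButton.setTitle("Send to app", for: .normal)

        [titleLabel, valueLabel, sendButton].forEach(view.addSubview)

        // Explicit swipe order for the screen reader, independent of layout.
        view.accessibilityElements = [titleLabel, valueLabel, sendButton]
    }
}
```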
The soon-to-arrive Live Mode, interfacing the T1 with your smart device, may well be an advantage. The question here is how a screen reader will behave in conjunction with the live view: will it be a constantly refreshing text field, or will it be an annoying graphic which a screen reader can't interact with unless the right input is given?
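One way a live view could stay usable is sketched below: the constantly refreshing distance is exposed as a labelled element marked as updating frequently, so VoiceOver can re-read it on demand instead of treating it as an inert graphic. The streaming callback and formatting are my assumptions about how Live Mode might feed data in.

```swift
import UIKit

// A sketch of exposing a live readout to VoiceOver. The update callback
// and value formatting are assumptions, not the real Live Mode API.
final class LiveModeViewController: UIViewController {
    private let liveReadout = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        liveReadout.isAccessibilityElement = true
        liveReadout.accessibilityLabel = "Live distance"
        // Tells VoiceOver this element's value changes continuously.
        liveReadout.accessibilityTraits = [.staticText, .updatesFrequently]
        view.addSubview(liveReadout)
    }

    // Hypothetical callback fired as the T1 streams readings.
    func deviceDidStream(distanceInMetres: Double) {
        let text = String(format: "%.3f m", distanceInMetres)
        liveReadout.text = text
        liveReadout.accessibilityValue = text
    }
}
```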
Integrating accessibility into the Rock app is not just about having a screen reader running, but also about utilising tools like the system's speech synthesis components and accessibility services for speech output. That way, even if you don't have a screen reader running, or don't have a sight impairment at all but want measurements spoken to you while you're working, the feature is still useful; each person has different needs.
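On Apple platforms that kind of direct speech output is available through AVSpeechSynthesizer, which works whether or not VoiceOver is on. The small sketch below, with a hypothetical helper type and phrasing, shows the idea.

```swift
import AVFoundation

// A minimal sketch of speaking a measurement aloud via the system speech
// synthesiser, independent of any screen reader. The type name and wording
// are assumptions for illustration.
final class MeasurementSpeaker {
    private let synthesizer = AVSpeechSynthesizer()

    func speak(distanceInMetres: Double) {
        let utterance = AVSpeechUtterance(
            string: String(format: "%.3f metres", distanceInMetres))
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        synthesizer.speak(utterance)
    }
}
```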
Including audible alert tones is also not just about accessibility, but about notification of a function. Even using the system speech service to announce an event such as "Device Connected" would be a game changer for interfacing and engagement, so you're not constantly looking at or handling a phone or tablet.
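A sketch of pairing a tone with a spoken event on iOS is below; the connection callback is assumed, and the system sound ID is just an example value.

```swift
import UIKit
import AudioToolbox

// A sketch of combining an alert tone with a spoken announcement when the
// device connects. The sound ID is an example; the announcement is spoken
// when VoiceOver or another assistive technology is active.
func announceDeviceConnected() {
    // Short audible cue for everyone, screen reader or not.
    AudioServicesPlaySystemSound(SystemSoundID(1057))

    // Spoken "Device connected" for screen reader users.
    UIAccessibility.post(notification: .announcement,
                         argument: "Device connected")
}
```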
Accessibility integration is also about how data is passed and handled. Take a function like sending the measurement from the device directly to the smartphone or tablet as live text entry into any app with an edit field, where the data is pasted in as it arrives, just as you'd find on Bluetooth micrometers and calipers. You'd be surprised how effectively that data can be used between applications. A good example is a 2D/3D CAD system: you take measurements of a part, the CAD package receives the text input for the part being drawn, and it sizes the object to the specified measurement. This works, it exists, and it has done for years; I've used it myself in the past.
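Within a single app, the "live paste" idea could look something like the sketch below: a reading arriving over Bluetooth LE is decoded and inserted into whichever text field currently has focus, much as a Bluetooth caliper types into whatever field has focus system-wide. The characteristic layout, encoding and the activeField reference are all assumptions, not the T1's actual protocol.

```swift
import UIKit
import CoreBluetooth

// A sketch of live-pasting an incoming reading into the focused text field.
// The BLE characteristic format and activeField wiring are assumptions.
final class MeasurementReceiver: NSObject, CBPeripheralDelegate {
    weak var activeField: UITextField?

    func peripheral(_ peripheral: CBPeripheral,
                    didUpdateValueFor characteristic: CBCharacteristic,
                    error: Error?) {
        guard error == nil,
              let data = characteristic.value,
              let reading = String(data: data, encoding: .utf8) else { return }

        DispatchQueue.main.async {
            // Behaves like typing: the measurement lands wherever the cursor is.
            self.activeField?.insertText(reading)
        }
    }
}
```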
Lew