FSCast # 255.

February, 2025

 

 

·       We check in with Elizabeth Whitaker to find out Vispero's CSUN and training event schedule.

·       Then we dive into web accessibility and usability with Brett Lewis and TJ Squires.

·       And finally, we get to meet Zülal Tannur, a blind entrepreneur and AI enthusiast from Turkey.

This and more on episode 255.

 

 

Vispero’s CSUN And Training Schedule

OLEG SHEVKUN:

So hello, and welcome to FSCast episode 255 for February, 2025. We've got a full episode today, but first of all, why don't we check in with Elizabeth Whitaker to find out what's going on at Vispero. Hi, Elizabeth.

ELIZABETH WHITAKER:

Hi, Oleg.

OLEG:

Well, the big thing is CSUN conference. What are the dates for that, by the way?

ELIZABETH:

Oh, absolutely. That is going to take place March 11th through the 14th, and that's going to be in Anaheim, California.

OLEG:

And I understand that Vispero has a booth, so people can find out more details on the CSUN website and on Vispero's website as well. Is that correct?

ELIZABETH:

That is correct. So if you go to freedomscientific.com and you look for the heading that says events, you will find it under there. It's going to take place in March. So you will navigate there and find the link to the Vispero CSUN page and it actually lists all of the events that we will be presenting from Vispero.

OLEG:

And CSUN is a four-day event. How many presentations have you got?

ELIZABETH:

I personally have two that I'm working on, and there are several of us actually that have more than one. So a lot of presentations that will be happening. A lot of really interesting information that people are going to be talking about this year.

OLEG:

Let's go through some of the highlights. What do you have for us?

ELIZABETH:

All right. I'm just going to mention a few of these here, because that schedule, once again, is listed on the Vispero CSUN page. But on Tuesday, March 11th at 2:20 in the afternoon, and that's going to be in room Platinum 5, there's a presentation that's going to be given by Ryan Jones, and it's called AI and AT: Balancing Security with Productivity. So that's going to take place on Tuesday. Wednesday, March 12th, at 11:20 there is a presentation called What's New and Coming Up in JAWS, ZoomText, and Fusion. And that is Ryan Jones and Roxana Fisher. And again, that is also in Platinum 5. And then also on Wednesday, at 3:20 in the afternoon in Platinum 5, Corey Perlander and I are going to give a presentation called Talk to Me, JAWS: 10 Tips to Make Documents More Screen Reader Friendly. And then on Thursday afternoon, March 13th at 2:20, also in Platinum 5, Rachel Buchanan and Mohammed Laachir are going to talk about FSCompanion, the ultimate AI JAWS assistant.

Once again, a lot of great presentations going on. A lot on accessibility, kiosk accessibility, Microsoft Teams. So check out that page again, go to freedomscientific.com, look for the heading that says events and you'll find the CSUN page under that. And if you have any questions, you can always email us at training@vispero.com.

OLEG:

And obviously, you've got webinars scheduled. Could you just highlight one that seems most interesting to you at this point?

ELIZABETH:

Sure. So on Thursday, March 27th at noon Eastern, we will be hosting a webinar on navigating OneDrive with JAWS. That's something we get a lot of requests for, so we're going to be talking about OneDrive. And of course, our Freedom Scientific training schedule will come out the 1st of March. You can go to freedomscientific.com/training and look for the schedule heading there, and all of our March events will be listed.

OLEG:

That's perfect. And also, in March we start celebrating JAWS' 30th anniversary, and I understand that the celebration is scheduled to kick off at CSUN. In our March episode we'll also talk more about the history of JAWS, the making of JAWS, and where JAWS is now. So our March FSCast will be a recap of some of CSUN's events, but also a JAWS 30 special episode. And Elizabeth, I'm really looking forward to seeing you and hearing from you in March, when you're full of impressions from CSUN.

ELIZABETH:

Happy to come back and talk about that.

 

 

Talking Web Accessibility And Usability

OLEG:

Web accessibility has always been a hot issue in the blindness community. When is something accessible? When is it usable? Is there a difference? Who should take the lead in web accessibility? Website developers? Screen reader developers? Here at Vispero we're privileged to have two branches. One is Freedom Scientific, that we all know about. The other one is TPGi, and that's the branch of Vispero that's in charge of accessibility testing and consulting. So today we have with us people from both of these branches to answer our questions. First, Brett Lewis, senior developer and product owner with Vispero. Hello, Brett.

BRETT LEWIS:

Great to be back on FSCast. Haven't been here with you before, Oleg, so I'm interested in the new style.

OLEG:

And TJ Squires is with us from TPGi side. TJ, hello and welcome to the show.

TJ SQUIRES:

Thanks for having me. I'm excited to be here. It's really cool to share a little bit about what I know, and I'm hoping that the listeners find it interesting.

OLEG:

By the way, what is TPGi all about?

TJ:

So TPGi is the accessibility arm of Vispero. And to put it short, what we do is accessibility testing. And we have different branches within TPGi, but ultimately the large majority of it is to perform accessibility tests and remediation advice for clients.

OLEG:

Well, I'm sort of vaguely aware of what it takes to become a software developer, but what does it take to become an accessibility tester?

TJ:

So I like to think of it as knowing the software. For full disclosure, I am a JAWS user; I have been for 25 years. And so it helps to have an intimate knowledge of assistive technology, I think. Also, to know a little bit about the web. I'm not a web developer by any stretch of the word, but I would consider myself fluent in HTML and ARIA, and I know enough to be dangerous in JavaScript and CSS. And so to have some of that knowledge is crucial, as well as to be aware of the standards that are out there, the Web Content Accessibility Guidelines, or as we call them, WCAG. Of course, the European Union now has its accessibility requirements, which ultimately give a nod to WCAG. So if you're looking at starting, that's where I would start: really brush up on your assistive technology skills and your HTML, get familiar with the ARIA specification, and really hammer home the principles of WCAG and the success criteria that ultimately we at TPGi test against.

OLEG:

I'm sorry to be asking a potentially unpleasant question, but with what TJ just said, Brett, is your responsibility simply to make sure that the screen reader, such as JAWS, is also following those guidelines or is it more complicated than that?

BRETT:

So it's interesting. And TJ talked about the WCAG standards, and TJ can completely overrule me on this, but my impression of WCAG is largely that it's here's the kinds of information that needs to be available to say something is accessible. I mean, it doesn't give you a lot of the details about things like is this control type known, that kind of stuff, but there are all these things that need to happen. So you need to know if focus changes, and can you perceive all of these kinds of things? So there's the WCAG standards that provide overall accessibility. Then there's the ARIA standards that say, here's how I can take HTML and make it into something that the screen reader can say, and here's the rules about what should be said. So in other words, if I have something on the page that I want to say as a button, ARIA basically says, "Hey, even if it's not a button, I can make it say that it's a button." And it talks about how we translate all of that stuff into these very specific things.

OLEG:

But that busts another myth, because I was kind of thinking that your standards are like the constitution you need to follow, but actually what you just said means that the standards can be misused as well.

BRETT:

Oh, definitely, they can be misused. I mean, everybody jokes about the first rule of ARIA is not to have to use ARIA, right? And so is there a way that we can make this happen with sort of a standard way of doing things? Sometimes ARIA is about trying to force a screen reader or an assistive technology to do what the developer thinks it should do. So an example, I had a web page where someone's like, "Oh, we should make this focusable. You should be as a user able to tab down this page and stop right at this place." And so what they did is they made the entire bottom half of the page a button. And it wasn't a button, but they wanted to make sure that it stopped there and focused that, and gave you attention that this was something. And this broke all these other things. I mean, ARIA really is more about giving you a set of guidelines to accomplish what you think you want to accomplish.

And then on top of that, now there's something called the ARIA Authoring Practices Guide, and they have a bunch of patterns and practices where they say, if you have a tree view, here's a good way to do this. So it's all these different pieces that kind of interact when we talk about standards. I mean, there's WCAG, there's the ARIA, there's the patterns and practices, and now there's even an ARIA-AT group that's trying to come up with standards for when you come to a button, here's the things a screen reader should say. And so this isn't so much from the web developer side or anything, but a user can with confidence go to a website and say, I know if I find a button, I'm going to know that it's a button. I'm going to know that it's pressed if that's the kind of thing that I should know. And so there's this sort of standard set of information that every screen reader should present. That's very much a work in progress at this point.

TJ:

And I think that one of the important things, especially if you're talking about WCAG, is to stress that last word, it's a guideline, and so it is not a hard and fast rule. I have encountered often where I would write up something and give remediation advice sometimes using ARIA, sometimes not using ARIA, and they would come back and say, well, I tried this. And we'll go back in and say, oh, well the screen reader can access it, a keyboard user can use it, et cetera. And so that eliminated the accessibility barrier. And I think it's also worth mentioning... I understand we're talking here on FSCast especially about from a blindness perspective, but accessibility goes way beyond that. It encompasses hard of hearing, it encompasses people with cognitive disabilities, and encompasses people with mobility impairments as well. And so we have to keep in mind all of that.

And in relation to ARIA itself, one of the things about ARIA is I like to think of it kind of like a translator. Web developers make these really fancy components, and the screen reader needs to interpret each one as something. And so it may look like a button, it may at its core act like a button when it's clicked on, and if you use ARIA to tell the screen reader, "Hey, this has the role of button," the screen reader will then announce it as a button. But it may not have the development behind it; for example, it may not have the ability to receive focus, or the ability for the Enter or the Space key to activate it. You can assign the role of button to anything. And so ARIA is not a one-stop shop: just assigning an ARIA attribute does not make something accessible.
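TJ's point can be sketched in markup. This is a hedged illustration, not production code; the element id and the doSave handler are hypothetical:

```html
<!-- role="button" only changes what a screen reader announces; focus and
     keyboard activation still have to be wired up by the developer. -->
<div id="save" role="button" tabindex="0">Save</div>

<script>
  const save = document.getElementById('save');
  save.addEventListener('click', doSave);
  // A native <button> handles Enter and Space for free; a div does not.
  save.addEventListener('keydown', (e) => {
    if (e.key === 'Enter' || e.key === ' ') {
      e.preventDefault(); // stop Space from scrolling the page
      doSave();
    }
  });
  function doSave() { /* hypothetical save action */ }
</script>

<!-- The native element gives you all of the above with no ARIA at all: -->
<button type="button">Save</button>
```

This is also why the "first rule of ARIA" Brett mentions favors the native element whenever one exists.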

OLEG:

When is there too much accessibility? Or, can you think of a situation of too much? I'm thinking of two specific cases. Sometimes on websites you can hear JAWS saying "button" twice, so you hear, "Close button, button." And the worst part: I recently saw a "wonderful" iOS application, and if you go through the interface, you reach the top right corner and it says literally this: "This is the accessibility label for the settings button, button."
Why do those things happen?

TJ:

It's misuse of ARIA, or misuse of the technology. When you're working with developers, a lot of them, and I don't want to lump developers into a category, because there are some very, very, very smart people out there whose intelligence and ability to write code I can't even comprehend, but especially when it comes to "button, button," they will actually include the word button in the ARIA. And so the ARIA label, most often from my experience, will say "Settings button." And when we catch that, we usually write it up and say, "Hey, JAWS or NVDA or VoiceOver or Narrator or any screen reader will have the capability of parsing and announcing that this is a button, so you don't need to say in your label that this is a button."
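The "button, button" pattern TJ describes might look like this in markup (a minimal sketch; the gear character stands in for whatever icon the site uses):

```html
<!-- What causes "Settings button, button": the role is baked into the label. -->
<button aria-label="Settings button">⚙</button>

<!-- The screen reader already announces the role, so the label should name
     the control only: -->
<button aria-label="Settings">⚙</button>
```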

Just a fun side story to this. I had an experience where, and this was early on in my accessibility career, I did some remediation and I looked at the web page once the client sent it back for retesting. And I heard it announcing things, like the examples, close button, settings button, contact-us button, and I'm like, "Oh, this is fantastic." And so I shifted away from the page, went back, and I was like, "Oh, I'm going to go back to the settings button and activate it to see where it was." So I pulled up JAWS's buttons list, and – no buttons.

And so that's another reason the rule of ARIA, don't use ARIA unless you have to, is popular. It sounds great when you're using the down arrow with your virtual cursor to go through the page, but as soon as you want to take advantage of the numerous options that JAWS has to navigate by elements, if they don't do it right, you may not see it at all, even though it sounds correct. And of course I'm assuming the customer is going to reach out. Before I had this knowledge, I would have reached out and said, "Hey, JAWS isn't announcing buttons correctly."

BRETT:

So one of the things that I think also makes this a problem, getting the labels and stuff wrong, is that for a sighted user, or any user not running assistive technology, ARIA should have no impact. In my experience, a lot of web developers have great intentions. They really do want to make stuff accessible and usable by a lot of people. But they say, "Oh, I know what to do. I looked it up, and I attach an ARIA label." And so they go in and they put the label, and then they never really experience it from the screen reader point of view. There's no way that they're going to even perceive that label unless they run some kind of assistive technology. And so it can be really hard for them sometimes, because they have good intentions. They said, "Oh, I know, I need to add an ARIA label. I have some accessibility testing tool, and I ran that and it said, 'Yep, everything's got a label.' I'm good to go."

But there's sort of this usability component, I mean, you're kind of touching on it with these label questions, that just isn't there all the time. I mean, it's easy to not know whether something's usable. When I worked somewhere else, one of the things I was doing was working on web accessibility. I was supposed to make it better. Developers would come to us and say, "I have a list of text on the screen and I know that I need to make it accessible with the screen reader. And so what I did is I made each list item focusable, so the screen reader user can now tab through every single list item on this page."

OLEG:

The question is do we want to?

BRETT:

Yeah, exactly. It was a horrible experience. And it was mostly just that they didn't know how screen readers worked. And I said, "Look, there's this other mode that you can use to navigate by arrow keys and stuff, because tabbing is just impractical." I mean, they had good intentions. They marked everything up, they knew how to set focus. It was great; it set everything as you tabbed. It was technically accessible, maybe, and certainly not usable with a screen reader.

TJ:

I would argue that it's not technically accessible, because you're creating an undue burden for keyboard users because of all that tabbing through.

BRETT:

Yeah, exactly. Fair point.

TJ:

And so one of the things that I like to say to clients especially is, "Well, first of all, hire people as part of your QA process. Hire people with disabilities." Which I think we can all agree is just a no-brainer. But make sure that people with disabilities and users of assistive technologies are using your product, because they'll tell you. And make sure that they're part of the testing process. And if you don't have people with accessibility knowledge as part of the testing process, pick up a screen reader and learn how to use it. One of the biggest faux pas I encounter often, as Brett kind of touched on, is that notion that screen reader users just use Tab, and it is extremely not the case.

BRETT:

A lot of developers, and they don't even think about this, I mean, there's the tabbing problem, but a lot of them maybe use a Mac and they say, "Oh, there's VoiceOver. This is great. I can run it on a webpage. Now I whip out my mouse and I slide down the page and I click on that button and hear it say button." Or they navigate somewhere completely using the mouse, click on it, and everything's great. Or they'll do something like use an accessibility checking tool to run a page analysis and go, "Hey, this is a great report." And it says something like, this doesn't have a label, or everything should have a description.

So I mean, you were talking about the close button. My favorite along those lines is, and I can't remember the name of which company was doing this, but they had a requirement that everything had help text associated with it. So you'd tab to the close button and not only would it, I think they got it right, so it did say just close button once, but then it would announce after a pause the description, "Pressing on this button will cause this dialogue to close." Every single time.
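The help-text pattern Brett describes is typically wired up with aria-describedby, which screen readers generally announce after the name and role, often after a pause. A hedged sketch (the ids are hypothetical):

```html
<!-- The description is associated explicitly, so assistive technology can
     announce it; sighted users see it as ordinary text. -->
<button id="close" aria-describedby="close-help">Close</button>
<p id="close-help">Pressing this button will close the dialog.</p>
```

Used sparingly this is helpful; repeating self-evident descriptions on every control, as in Brett's example, quickly becomes noise.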

OLEG:

Wow. That's like our Tutor Messages in JAWS, but you can turn them off.

BRETT:

But they would do it themselves. Yeah, exactly.

OLEG:

Turning to another side of this, because we've been looking at it from the vantage point of a web developer. Turning to the screen readers, one thing I've heard is, on the web, or in web applications, JAWS is not really following what's in the code, but it's doing too much interpretation. Is this true? And if so, why?

BRETT:

The short answer to your question is yes, we do interpretation. And we do that for I think a very good reason, and that's that I think our focus has largely been about making the user experience the best it can be. ARIA has really only been a thing for the last slightly more than 15 years. And prior to that, we were really forced to make decisions. And in large part, the criticism that mostly comes up is about guessing what labels are for controls. And we would historically say something like, if there's a label adjacent to something, let's call that a label. I mean, there's ways to make that connection in HTML and there's ways to make that connection in ARIA, but there were also cases where there just weren't labels for things, and we would try to guess how to do that. And so from a user experience point of view, we'd often get it right, that we'd get the control name and so on for labels that weren't necessarily directly associated. This does not match what should happen all the time in ARIA. And I think there is growing pressure now for us to do some of those things in a more standards compliant way, because otherwise users get a different experience with different screen readers.
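The label guessing Brett describes matters most when markup leaves the association implicit. A minimal sketch of the difference (the field names here are hypothetical):

```html
<!-- Explicit association: no screen reader has to guess. -->
<label for="email">Email address</label>
<input id="email" type="email">

<!-- Adjacent text with no association: JAWS may heuristically pick it up
     as the label, but other screen readers may announce the field as
     unlabeled, so users get different experiences. -->
<span>Email address</span>
<input type="email">
```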

TJ:

And I think that right now, because that is another thing that I tell clients often: JAWS is built for the consumer, and JAWS is going to try to do its best to make those determinations. Yes, your form field does appear to have a label; no, it actually doesn't have one in the code. And that's a JAWS thing, and it takes some explanation. And to kind of address one of the things from your earlier question, JAWS does actually take everything from the web page into its virtual cursor, sometimes things that the web developer doesn't want it to. Things that were hidden from visual users sometimes are not hidden from screen readers, and vice versa.

OLEG:

That would be a bug, wouldn't it?

TJ:

Well… We recommend screen reader-only text from time to time. A good example is a search button. JAWS isn't going to be able to read a magnifying glass icon, and somebody as a new user may not understand it, especially if they've never had vision. Up until I started testing, I didn't even know that there were icons for all of these things; I just thought they were labeled with text. And so what they may do is actually hide the magnifying glass from JAWS users and have text that labels that button but isn't visible on the screen. It could be the exact color as the background. It could be moved off by 10,000 pixels, so it doesn't show up on the screen. But there's nothing that actually hides it from JAWS. And so sometimes you might want text that's not visible.
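One common way to do what TJ describes is a "visually hidden" utility class; the class name is a convention, not a standard, and this is a sketch rather than the only correct recipe. The text is clipped to a one-pixel box instead of using display:none, so assistive technology still reads it:

```html
<style>
  .sr-only {
    position: absolute;
    width: 1px;
    height: 1px;
    overflow: hidden;
    clip-path: inset(50%);
    white-space: nowrap;
  }
</style>

<button>
  <svg aria-hidden="true" focusable="false"><!-- magnifying glass icon --></svg>
  <span class="sr-only">Search</span>
</button>
```

The icon is hidden from screen readers with aria-hidden, while the hidden span gives the button its accessible name.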

BRETT:

The other case for this, just to loop back a little bit to what I said about ARIA earlier, is that it's not perceivable by people who aren't using assistive technology. So when you say something's not on the screen, I mean, a search button is a great example, right? I could call it anything in ARIA, and you're never going to necessarily see that text on the screen anywhere as a sighted user. And so a lot of what ARIA is all about is presenting information that's not visually on the screen. I mean, that's exactly what we do. You can even assign role descriptions to things, for example. I do not like them and I'd prefer it if people didn't do that, but if you want to call something the slice-of-pizza control, you're more than welcome to do that in ARIA.

TJ:

Please don't.

BRETT:

Yes, please don't.

OLEG:

So TJ, you've been talking about working with clients and helping them to improve their websites in accordance with accessibility practices and regulations and everything. But then would there be situations when we, or you guys as JAWS developers, would take the lead? So this site or this resource is technically not quite accessible, but the resource is so important that we have to work around the accessibility issues in order to give the users the best experience. Is that something that ever happens? If so, is that common? Basically what I'm asking is what is the relationship between the accessibility tester and website developer on the one hand, and a screen reader developer on the other hand in providing accessibility?

BRETT:

As I said earlier, one of our focuses has really been on making something work for the consumer. And a lot of times, I mean, it depends a little bit on where we are and who the customer is and some other things. But I remember, and this was about 10 years ago now, I think it was maybe the German government that had a case where they wanted alt text on a frame for some reason. They wanted a way to provide information. And technically that's a violation of the HTML standard; alt text is not something that should be supported on a frame. But because we're talking about the German government, and we had difficulty trying to get them to make a change, it was one of those things that we supported at the time. So we will make some exceptions if we can get information to try and make something accessible, depending on the size of the website, those kinds of things. I hate doing that, because it makes a lot of JAWS internal code much more complicated, and a lot of the time the real fix should just be better web design. But there are always these trade-offs. I mean, I think customer focus is what we've really emphasized, and so trying to make something work for our users has been something that we've been willing to do, for better or worse sometimes, I guess.

OLEG:

What do you do if you see web support backsliding? Now, case in point: a website that worked just fine, no problems. And then after a certain JAWS update, something bad happens. You may be reading Wikipedia, and for some odd reason, although it never happened before, when you're going over a link, it reads you the link and also the source or description of the link, which results in double speaking. Are these fair cases to go to Vispero's tech support and report, or do you go to the website developers?

BRETT:

So you framed this question in an interesting way, and what I mean by that is, if it happens after a JAWS update, this is most likely something that definitely needs to come to Vispero tech support. The Wikipedia one is a particularly interesting one, because what we actually did in this particular case was to start announcing descriptions from more controls on the web, links being one of them. And as it turns out, what Wikipedia had done all along, but which only really got exposed with the JAWS update, is that they had quite often added the name and description as duplicates. Sometimes they were close duplicates, so they would have a description, but they'd put a bullet or something at the beginning of it, so we'd go, "The text doesn't match." But yeah, this is something that often can come directly to Vispero tech support. Even if a website changes and JAWS doesn't speak it well, sometimes Vispero tech support is a good first line of defense. I mean, it's hard to say to a user, look, something broke and it's either Windows or your screen reader or the web browser. I feel the user's pain here. I don't want to pass the buck and say, oh, this isn't us. And I think tech support does a great job of trying to help figure out who these should go to. And when it isn't us, sometimes those still become tech support requests, and they help us. We as engineers can look at it and say, "This is probably a website issue, but let's report it to the website concerned with a suggestion on how it could be fixed." And if you're a website owner, TPGi might be your best friend in this case.

OLEG:

And with this description stuff in Wikipedia, yeah, Wikipedia has been doing it for a long time, but actually the solution for this, I guess, has been in JAWS for a long time. Because in web verbosity settings, you can configure each individual level, and one of the checkboxes is announce descriptions, so you can check or uncheck that for the level you desire. But again, this is so deep in the hierarchy of Settings Center that quite a number of users would never even reach it.

BRETT:

That's right. And just to throw in a shameless plug for yet another JAWS feature, the FSCompanion can often answer some of these questions, so it's a good resource.

OLEG:

When you do want to reach out to a website, because it doesn't work with JAWS, and actually in many cases today it doesn't work with any other screen reader either, one thing you could use would be Picture Smart, like for unlabeled buttons. But say you've exhausted those things. Is there a tip, or are there any tips, that you could give about reporting an accessibility issue? Some developers will say, "Please provide the screenshots." And I know many blind people who are really frustrated by this.

TJ:

I wrote a blog article for TPGi on how Picture Smart has really benefited me as an accessibility tester. So if anybody's curious about that perspective, it's out there on the TPGi blog.

OLEG:

Actually a fantastic article. Yeah.

TJ:

Thank you. One of the things that I would also say, though, is again, Picture Smart. In this case, a website developer wants a screenshot, so use Picture Smart and give it very detailed instructions about describing what's in the screenshot that you took. Because there are ways of getting screenshots. They're not going to be perfect. With the Snipping Tool in Windows, for example, we're not going to be able to crop things out. So I would obviously go through with the caveat of making sure it's something that you're comfortable potentially being in there. But if you're able to, take advantage of the tools that JAWS provides. OCR is another one, if you're looking for text, just to make sure that you've got the screenshot. And at least send it along and say, "I tried." Another thing that I would do, and this is something that I learned doing my job, is be super explicit in detail with your bug report. Write it out in steps.

One of the methods that I like to use is: let's take a button that's not being announced as a button for example, and if you don't know ARIA or other lingo, you could just go and say, "Hey, I have a bug report and there's a button that's not being announced as a button. I'm a screen reader user. Using AI, I was able to determine that it is a button with a magnifying glass icon. I think I gave you a screenshot, it's attached. Here's the steps that I took to activate it." Number one, turn on JAWS. Number two, make sure that you're on X page of the website in a logged in or logged out state, if that matters. Make sure to explicitly say that. Three, give them explicit details. It's in the header region. Or some users may say, "Well, it's in the banner region." So using JAWS, go to the website, press R until you hear banner, press down arrow three times, and you will land on this unlabeled button. The expected behavior is that it has a label so that I know what it does. The current behavior is it says button.

Another thing that you could do is fire up Zoom and share screen, and record yourself doing it, and send that along instead of a screenshot. Because again, if you say, "Hey, this button that's in the header doesn't work and it's unlabeled," they're going to look at it and say, "I see a magnifying glass. It looks labeled to me." Or things like that. And so you have to be explicitly detailed. The more detailed, the more time it's going to take, obviously, but the more ambiguity you take out of the situation.
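The steps TJ walks through could be consolidated into a short report along these lines. This is a sketch; the site, region, keystrokes, and versions are hypothetical placeholders to be replaced with your own details:

```text
Subject: Unlabeled search button on example.com home page

Environment: Windows 11, Chrome, JAWS 2025
Steps to reproduce:
  1. Start JAWS and open https://example.com (logged out).
  2. Press R until JAWS announces "banner region".
  3. Press Down Arrow three times.
Current behavior: JAWS announces only "button".
Expected behavior: the button has a label, e.g. "Search", so I know what it does.
Notes: Using AI image description, I determined the button shows a
magnifying glass icon. Screenshot and screen recording attached.
```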

BRETT:

I mean, TJ said, "Give explicit steps." And people try, but sometimes they don't even think about the things they're doing. I mean, he said, "Press the down arrow key three times," which is great. That's very explicit and we now know that it's the virtual buffer and a whole bunch of other things. But just having a video sometimes you can see that really what the person did is use R, move to the banner region, and then tabbed. Or they did a say line versus arrowing down, moving by character. I mean, so many subtle things can get captured in a video that I really just want to second, third, whatever that is, please send videos.

TJ:

It's a good reminder to myself that even though I say to be explicit, I'm not always being explicit enough. And it's a really good endorsement of sharing videos. Clients will ask for videos all the time.

OLEG:

And another tip. This should not matter, but sometimes it does. When you say, "I'm tabbing or shift-tabbing through the page," my question is, "Okay, is your virtual cursor on or off?"

TJ:

It does matter.

OLEG:

Technically it should not.

BRETT:

Yes, technically it should not, but it does in some cases. Often that may be a bug; we need to make sure that information matches as much as possible.

TJ:

A fun fact with the virtual cursor off: if something has an aria-hidden attribute on the page and you tab to it, JAWS will still read it. My point was to try to illustrate that having the virtual cursor off is one of those things, right? Because with the virtual cursor on, it'll just go right past it, and so that's one of the reasons that it does matter. Also, it's a testing tool that I use: turning the virtual cursor off to try to catch focusable elements that are hidden with ARIA.
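The mismatch TJ describes could come from markup like this (a hedged sketch; the link is hypothetical):

```html
<!-- The antipattern: aria-hidden removes the link from the virtual cursor,
     but it is still in the tab order, so tabbing can still land on it. -->
<a href="/promo" aria-hidden="true">Spring sale</a>

<!-- If it really must be hidden from assistive technology, take it out of
     the tab order too: -->
<a href="/promo" aria-hidden="true" tabindex="-1">Spring sale</a>
```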

OLEG:

Okay. So TJ, can I teach myself accessibility testing or do I need a formal degree for that?

TJ:

I think you can. My background was in technical communication, and through technical writing I learned, because they had courses in online document design, and some of that came down to learning HTML. So especially if you are a blind individual looking into getting into accessibility testing, I think a degree in software will help, but I don't think it's necessary. Using assistive technology will take you a long way. And it's just like anything else: if you're willing to put in the work, if you're willing to put in the experience, you can learn this stuff. I think it does take somebody who is willing to dive into things and say, "Okay, well, why doesn't this work?" and put a lot of research and a lot of time into learning.

There are certifications out there that you can acquire. I'm a certified professional in web accessibility from the International Association of Accessibility Professionals. Not necessary, but that did give me the tools to get into the mindset of what the accessibility industry is trying to teach. But if you wanted to go into software, and I think if I were to do this all over, knowing that accessibility was an amazing thing, I would've gotten a degree in computer science or something, to give myself the intimate developer knowledge that can sometimes make a difference.

OLEG:

Well, Brett, TJ, thank you for being with us, and thank you for sharing your knowledge and expertise and your heart with us, and thanks for doing what you are doing.

BRETT:

Thank you very much, Oleg. This was great.

TJ:

This was a lot of fun. Thank you. It was amazing.

 

 

JAWS for Windows 30th Anniversary Bit

SKY MANDEL:

Hello. This is Sky Mandel, and I am recording a segment for the Freedom Scientific JAWS 30th anniversary. JAWS for Windows has helped me attend school, go to university, and allowed me to use the computer. It has helped me do amazing things over the years. It's even helped me find a friend of mine who is a musician like me, named Holly Karma. Congratulations to Freedom Scientific for 30 years of this screen reader. And cheers and best of luck to you guys in the future.

OLEG:

Thank you, Sky. And if you would like to share a JAWS 30th anniversary bit or contribute a JAWS power tip, the address to write to is fscast@vispero.com.

 

Talking AI, with Zülal Tannur

So next on FSCast, we hear from Zülal Tannur, a blind entrepreneur and developer from Turkey. Hello, Zülal, and welcome to the show.

ZÜLAL TANNUR:

Thank you so much for your invitation again.

OLEG:

So talk to me about yourself, just as a short introduction.

ZÜLAL:

First of all, I was born with limited sight, so I learned how to see on the computer screen, just by looking at small images of things such as human faces, colors, objects, and many more. By doing this, when I was faced with them in real life, I learned how to recognize them: what they were, their shapes, colors, and different aspects.

OLEG:

So that was before you went to school, right? That was at home?

ZÜLAL:

Yeah. I was just four years old.

OLEG:

Was that your idea to start exploring the computer screen, or were your parents actually guiding you in that direction?

ZÜLAL:

Actually, we visited someone's home, and there I saw a monitor directly. And I realized that when I put something on it, it appeared much bigger. By doing this, I could recognize the shapes of objects much more efficiently. After this, I talked to my parents to persuade them to get me a computer. My father is an engineer, and we had a laptop in our house, but it was a bit small, so I couldn't, for example, recognize the images when I looked at its screen. So in order to provide me a monitor, they purchased me a computer, and we started to upload images of my friends, teachers, and my family into the computer together. And they told me, for example, yes, this is your father, this is your mother, and also your sister, your friends; I learned their faces personally. And then I discovered the internet, of course, because it had much more information and also images. So before learning how to read and write, I figured out how to use the internet in order to build a dataset for myself. It's kind of a natural machine learning system, I can say.

OLEG:

And I understand you learned to read and write with a computer as well. Were you using magnification built into Windows or Mac, or how was that done?

ZÜLAL:

It was a Windows computer, actually, a desktop. The computer was the first thing that I learned how to use. I was just four years old, four and a half, and I realized that the world was much bigger than whatever I saw. And also we discovered that, thanks to the plasticity of our brains actually, my ability to see started to extend itself. It was directly reflected in my paintings, by the way; they started to gain perspective and many more things. And then suddenly I lost my vision when I was just 10 years old, and I realized that if I wanted to use my computer again, I needed to use a screen reader. And of course, I came across JAWS. In the first week after I lost my vision, I started to use JAWS, and of course, to search on the internet again.

OLEG:

Were you in school? Was that a regular school or was that a school for the blind?

ZÜLAL:

I was in the regular school with the other students as well.

OLEG:

How was that working? Is that normal in Turkey, like mainstreaming blind or vision impaired children with everyone else?

ZÜLAL:

It was a bit of a challenging situation, because we went to so many different types of schools, and none of them would admit me. I had high potential, but I was a low-vision child and I was just seven years old, so they couldn't, for example, trust themselves to educate me. And of course, they rejected me at first. We really went to all of the schools in Istanbul, and then we started back at the beginning, going to the schools again and again. And one of them finally accepted me. So I started school as a regular student in a regular class. Yes, I sometimes heard from my teacher that she wanted me to go to a blind school or something like that, but we never wanted to do this, and I completed my education in a regular school.

OLEG:

For someone who is seven years old and wants to go to a regular school, being rejected must have been a hard and difficult feeling. Did you ever feel sad, burdened, or really upset by this?

ZÜLAL:

Well, first of all, it's kind of a disappointing situation, but I felt that wherever I went to introduce myself, I needed to prove my abilities. For example, yes, I was talking to teachers in the schools and trying to prove my abilities to them. And then I decided to ask them a question. Normally, they were asking me to prove my abilities, but this time I asked them what they could offer me. It was kind of a game-changer question, because I told them directly that I didn't have to prove my abilities as a seven-year-old. And I asked them directly, "What do you offer me, as a high-potential student, as a different student, actually?" I was aware that I really needed to have special education, but I was also aware that I really needed to have equal opportunities with the others.

OLEG:

Yeah, absolutely. What other assistive technologies were you using then in your school years?

ZÜLAL:

There was a lack of assistive technologies, but I was using, of course, screen readers and computers. It was amazing. JAWS was the first of them. And then I also learned, for example, the Office programs, how to use them, especially for my education. And when I was 17 years old, I discovered, first of all, Seeing AI, and then Be My Eyes, and I also started to use other technologies, image processing technologies as well. And then I met the WeWALK team, makers of the world's first smart white cane, and I started to take on responsibilities as a manager. And I used so many different assistive technologies in my education, especially in my high school years.

OLEG:

Many of us are fascinated by AI, but you went deeper than that. You went on to learn about AI and to work in this field. How did this happen?

ZÜLAL:

First of all, I started to use image processing technologies, of course Seeing AI, and then other technologies as well. And I realized the gap between biological vision and image processing vision. Because, for example, I had experienced teaching my brain how to see. And I was aware that recognizing something is kind of a scalable thing, because your brain can learn, can modify itself; you can make decisions, and so on. But when we turned to image processing technologies, they were very generic. I couldn't modify them, I couldn't personalize them according to my personal choices, and many more things. So I started to build my own technologies. First of all, thanks to my achievements, Microsoft gave me a Leading Woman in Technology title. It was kind of an award, I can say. And then I was educated by Microsoft as the world's first visually impaired technology ambassador. And then I gained a title from Google, which is Women Techmakers.

Thanks to all of the titles and the responsibilities, I learned how to build and create a product, a solution, how to create an impact, a community, and things like that. After gaining these abilities, and especially after getting the leadership skills, I established my first company, From Your Eyes, in 2023. And then I established my second company, which is the parent company right now, NeuroVision AI Tech, which is in the USA. We are developing artificial vision technologies for closing the gap between biological and artificial vision. We are not developing just for visually impaired people. Rather, we are developing artificial vision, and artificial vision is required for both machines and humans.

OLEG:

The market of AI technologies, including AI vision, is now pretty saturated. There are numerous players in the field. Where is your company's uniqueness? What are you doing, or trying to do, that in your mind does not receive the attention it deserves from others?

ZÜLAL:

Yeah. Right now we have multimodal models such as GPT, Gemini, DeepSeek, and maybe in the upcoming years we are going to have more. But due to their computational costs and also their sizes, they have some limitations in customizing and localizing their technologies. For example, when we look at multimodal models such as GPT, we can integrate GPT into driving systems. But for customizing it according to the requirements of visually impaired people, we have some limitations in fine-tuning the AI model itself, because it is pretty big. So they are highly dependent on the cloud, which means that data privacy and security are crucial. And also, when we look at existing regulations, we see differentiated approaches to those kinds of rules in many different areas, in many different countries. So there is a demand for our specialized solutions, like From Your Eyes and NeuroVision AI, because of what we are doing to customize and localize our services.

For example, when we talk about our vehicle vision solution: apparently, 95% of visually impaired people don't feel comfortable during a smart vehicle journey, because we still cannot see the environment. We are able to use GPS technologies, but that is the only thing we can do in the smart vehicle system. Then think about autonomous driving systems. Again, 75% of visually impaired people don't want to use autonomous driving systems in the near future, again due to the lack of assistive technology in these systems.

A couple of months ago, I was in San Francisco using Waymo, Google's autonomous driving system, and it suddenly stopped. But I didn't know what was going on around me. For example, was there an accident? Was there a red light or something? There wasn't an accident; it saw a person on the pathway, so it stopped, but I didn't have the information. What we're doing is customizing our AI models for autonomous driving systems and smart vehicles by prioritizing user experience, comfort, and security. We directly integrate our software into the operating system of the car, reaching the internal and external cameras, getting the raw data from the environment, sending it directly to our AI models, merging it with our own evidence, and giving an output. We can describe the environment, we can assist the passengers, and we can merge data from all the other components of the vehicle for assistance.

At the same time, with the federated learning methodology, we aim to use vehicles as a mobile data lab. Because, especially for traditional systems, if you are using clouds, collecting the data is kind of a problem. But if you are deploying a federated learning mechanism, which is kind of a new term in the technology world, you can process the data on local devices, such as cameras, embedded systems, and many more. At the same time, you can also keep training your AI models with federated learning mechanisms. There are several companies who can achieve this and commercialize it, and we want to be one of them, using accessibility.
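The mechanism Zülal describes, training on each device's local data and sharing only model updates rather than raw data, can be illustrated with a toy federated averaging loop. This is a minimal sketch under simplified assumptions (a one-parameter linear model and simulated "vehicle" datasets); all names are hypothetical, and it is not NeuroVision's actual system.

```python
# Toy sketch of federated averaging (FedAvg): each "vehicle" trains a
# tiny linear model on its own local data, and only the model weight
# (never the raw data) is sent back to be averaged into a global model.
# All names here are hypothetical illustrations.

def local_train(w, data, lr=0.01, epochs=50):
    """One client's local training: least-squares fit of y ~= w * x."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(global_w, client_datasets, rounds=10):
    """Each round, clients train locally; the server averages the weights."""
    for _ in range(rounds):
        local_weights = [local_train(global_w, d) for d in client_datasets]
        global_w = sum(local_weights) / len(local_weights)
    return global_w

# Three vehicles, each holding local sensor readings that follow y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
    [(5.0, 10.0), (0.5, 1.0)],
]
w = federated_average(0.0, clients)
print(round(w, 2))  # converges close to the true slope, 2.0
```

The raw readings never leave the `clients` lists; only the trained weight from each round is shared, which is the privacy property Zülal highlights.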

OLEG:

Now, I understand that you are the CEO of these companies. Do you do any development or is your role strictly business now?

ZÜLAL:

No, I'm doing development at the same time. Because if you're a founder, actually, especially a sole founder like me, you are the CEO and CTO at the same time. We are just four people, which means that we do everything together. And also, I'm a technical founder, so it's very important for us. I hope to delegate my responsibilities, because I'm really doing innovation development. It's kind of an important thing, because you can, for example, outsource sales, maybe business strategy, or other things, but you cannot outsource innovation, vision, and many more.

I can tell you that if you are not a programmer – I wasn't, by the way, at the beginning of the story – it doesn't mean that you cannot do something in technology, or that you cannot be a technical founder, or something like that. Again, you can be, because when we look at the new CTO approach or the new product manager approach, it doesn't mean coding everything. It means managing technology, directing technology, having the vision, and giving a clear scope to your team; so we need to have CTOs, product managers, testers, and many more. So if you are not a programmer or something like that, it is not the end of everything; it's just the beginning. Yeah, you can learn. I think you need to learn. But it is not the most important thing, I think.

OLEG:

Zülal, thank you for giving us the time and thank you for this interview. And I hope you reach your goals. But when those goals are reached, there'll be new ones.

ZÜLAL:

Yeah, of course. Thanks so much for your invitation and those great questions again.

 

Conclusion

OLEG:

This was Zülal Tannur from Turkey, and this brings us to the end of FSCast for February, 2025. Just a reminder that you can reach us by writing to fscast@vispero.com, or for training issues, write to training@vispero.com. And next month we're looking forward to our JAWS 30th anniversary and CSUN special. And until then, I'm Oleg Shevkun, on behalf of our entire team here at Vispero, wishing you all the best.