FSCast #261.

August, 2025

 

“We've been excited about it. We're trying to excite everyone else.” (Rusty Kelly)

 

“It's like going to the color high-res screen for a sighted person.” (Joe Stephen)

 

“August was a really busy, fruitful month.” (Elizabeth Whitaker)

 

“People are searching for data.” (Stephanie Enyart)

 

“Please say that you heard about the study via FSCast.” (Arielle Silverman)

 

Introduction

OLEG SHEVKUN:

Wow, what a lineup. Let me unpack some of this for you. First of all, hello and welcome to FSCast Episode 261 for August 2025. This month we are faced with a bit of a challenge. In September we're scheduled to release our updates for JAWS, ZoomText and Fusion. One of the central features of those updates is multi-line braille support. As of September, it is going to be available for users of the Monarch graphical braille display from HumanWare and APH, as well as for users of the DotPad from Dot Inc.

 

We could have waited to talk about this support until our September episode, but we thought this would be too much of a wait, especially because we would like to give you some time to prepare for what's going to be in the updates. Therefore, we've asked Joe Stephen and Rusty Kelly to join us today to give us a preview of this new feature. This is some pretty exciting stuff and there is a lot to explore.

Talking about exciting stuff, artificial intelligence is still a hot subject, and I'm pretty sure it's going to be a hot subject for years to come. The American Foundation for the Blind is conducting a study of how AI is used in real life, and we'll be hearing more about that study later in this episode.

It seems like our training department here at Vispero never goes on vacation, so of course we're going to be hearing from Elizabeth Whitaker about some of our exciting training developments. If you've missed a webinar, or if you would just like to hear it again, that's not a problem at all, because our webinars are archived and can be played on demand.

 

JAWS Power Tip of the Month

But before we go any further, I do have something that has been sitting on my desk for a couple of months now, and that's our JAWS power tip of the month. This time it comes from Gary Hammett. That tip has to do with a feature that has been available in JAWS for, well, I should say like 25 plus years, but it is just as useful today.

Gary Hammett writes:

“I have a JAWS Power Tip. This may be useful for people who want to edit the description of how punctuation is spoken, e.g. a user in the UK might want the synthesizer to announce 'period' as 'full stop'.

To do this:

  1. Open the JAWS Settings Center.
  2. Press CTRL+SHIFT+D to move to the default configuration.
  3. Type 'punctuation' into the search box and use the DOWN ARROW key to navigate through the results to the 'customize punctuation' button, then press SPACE to open it.
  4. Once in the dialog, press the punctuation mark that you want to change the description for and tab to the 'edit description' button.
  5. Press SPACE to open the 'edit description' dialog, and focus will be placed in the description edit box.
  6. Delete the text that is currently in the box and replace it with how you want the punctuation mark to be announced in the future, ensuring that the punctuation mark itself is inserted at the end of the text.
  7. Press ENTER, and the new punctuation description will be active.
  8. Tab to the OK button to close the 'customize punctuation' dialog.
  9. Tab to the OK button to save settings and close Settings Center.”

 

I'd like to thank Gary Hammett for this power tip. As our way of saying thank you, we'll be extending Gary's JAWS license for another year.

 

An In-Depth Look at Multi-Line Braille

One of the major innovations coming up in our September 2025 product update is support for multi-line braille. To talk about this, I'm joined here in our virtual studio by Joe Stephen, our senior developer responsible for implementing this feature. Hello, Joe.

JOE STEPHEN:

Good morning.

OLEG:

I'm also joined by Rusty Kelly, a product test engineer with decades of experience and one of our top specialists in all things braille. Hi, Rusty.

RUSTY KELLY:

Good afternoon, in my case.

OLEG:

It's an interesting setup right now. Joe is in Tasmania, and some of our listeners may be wondering where in the world Tasmania is.

JOE:

It's about 500 kilometers south of the mainland of Australia. Yes, we are our own island covered by jungle. Well, we call it forest and it's cold at the moment.

OLEG:

What do you mean cold?

JOE:

Well, I just came from Malaysia where it's sort of mid-thirties, I guess near 100 Fahrenheit and here it's kind of like six or seven and even colder at the moment.

OLEG:

Celsius, right?

JOE:

Celsius. Which is getting down there.

OLEG:

Right, like upper thirties Fahrenheit.

RUSTY:

My kind of weather.

OLEG:

Rusty is in Florida, right?

RUSTY:

Yeah. I'm pretty much in the area where the office is. I'm here in Holiday, Florida.

OLEG:

And I am in Marburg, Germany. So the studio is spread out all over the world.

We are celebrating 30 years of JAWS for Windows. You gentlemen have been with the company, or with companies, since it's changed over the years, for quite a while. Joe, when did you start?

JOE:

I started part-time scripting in 1996, and I went full-time in about 1999, so nearly 30 years.

OLEG:

I remember reading the JAWS users list on the internet at that time, and seeing this young man from Australia who was answering questions and knew everything about JAWS, and that was Joe, of all people.

JOE:

It's funny, I remember you too.

OLEG:

We go back quite a while. Rusty, again for the old-timers, those of us who used to have JAWS CDs and DVDs: I think prior to the voicing installation, there was Rusty Kelly's voice.

JOE:

Oh, yes. And that was so distinct, wasn't it?

RUSTY:

Welcome to JAWS.

OLEG:

So, to multi-line braille. A couple of months ago, I was talking to a customer of ours, and he asked me this question. He says, "Why are you making this big fuss about multi-line braille support? That should be quite easy to do." I go, "Well, what do you mean?" He says, "Well, okay, here's the display, the Monarch, that has 32 characters, 10 lines. 10 times 32 makes 320. All you have to do," he says, "is take a display driver and tell it that you've got a 320-character braille display, and lo and behold you've got multi-line support. You can read the text, and it wraps, and then you move to the next display, and you've got another 320 cells."

Quite frankly, I didn't know what to respond, because I've seen this being developed for quite a few years, and Rusty and Joe were working on this, not just seeing it. How would you respond though to a comment like this?

JOE:

It's a really good question, but it's very, very oversimplified. For a start, I don't think a blind person who hasn't felt a multi-line refreshable display can imagine what it is like. For me it is the biggest thing in braille in 50 years. It is life-changing, it is massive, and the rest of this interview will bear out why, but it is not simply taking a display and making it longer. For a start, the display won't automatically wrap text. Secondly, it won't start at your cursor position and know where to stop filling the display. You see, there's a lot of mechanics behind how we gather the text. For instance, you can't just screen-scrape to get text. Every application has a different technology, or there are several underlying technologies, which have to be present to be able to even gather more than the current line or the current paragraph.

It's a major technological advancement to be able to gather, using various technologies, as much text as will fit on the display, know where to wrap it, and to also make it so that when you pan, it knows exactly where to continue from at the bottom of the display. It's even more than that. You have, in applications like Excel, the ability now to vertically align numeric data, and as you arrow across the spreadsheet, see multiple columns and multiple rows, and still keep the coordinates on there with column indicators between the columns or leader dots, if you want, where you've right aligned numbers, like a table of contents, so that you can easily track. Then split the braille display when you need to show formulae and comments. It is phenomenally complex. We have to think from the ground up.

This has never been done before with braille. So it's not just a matter of well let's just send text to the driver and see what happens. It just doesn't work like that. For example, if we didn't have all our technology in place to gather text, what would you see? You'd see one line at the cursor wrapped to maybe three lines of the display or two and a half lines of the display, and that would be it, if that's all we did. Because that's all JAWS used to gather. It used to just gather the line at the cursor. Just wrapping that one line wouldn't even fill 10 lines or eight lines, or however many lines the multi-line display has. So that's the first answer.

OLEG:

What you're saying is, you had to redesign the backbone of how JAWS is gathering information from the applications. You could not just rely on the foundation that was built before, you had to tackle these new challenges.

JOE:

Correct. Many of these new challenges haven't been done before. So for instance, on the web we used to have a structured mode representation where you'd show LNK after a link. Well, that was only available for the focused link. If you just showed lines around that, they would be shown in line mode, because there was no mechanism in place to be able to go through the page and build a structured line of every element on the page. There were lots and lots of challenges. I'm sure there are still things that we can do to optimize the use of multiple lines of the display.

This is what I tell a sighted person. I say, "Go back many years to the time when we didn't have a color display. In fact you had a tiny little display, not even an LCD, just 20 characters. Can you read efficiently? Can you read the context of those 20 characters, even on the same line, without moving the display?" You see, when you've got 20 characters shown, you've got to memorize a lot more. Imagine working on a math problem: you can't refer to the line before without arrowing back and then having to arrow back to where you were.

I mean we tried to address this a little bit with Split Braille, which went quite a way for a single-line display, but sighted people who have a full screen can look back three lines, or look down three lines. They can see how text vertically associates, like in columns, like in Excel. You can add up numbers by feeling the units column, then the tens column, then the hundreds column. It is phenomenally better. Someone said to me after seeing multi-line braille, "Why would you ever go back to single line?" That's my thought also, but as I said, unless you've experienced it, you cannot know what you do not know.

RUSTY:

It's like you can't miss what you don't have. It's a game changer. It's like Joe said, the single largest advancement in braille in 50 years, as far as refreshable braille is concerned, because you can actually look at eight lines of braille at the same time. You can see lines where the cursor isn't located. If there's a link on that line, you'll still see LNK. I can't even really explain it well enough, but it is so revolutionary that the only way you're really going to experience it is to touch it yourself.

A table is a really good example. When I first saw a table aligned in a tabular fashion, where you could actually see the data lined up, there are column separators that separate one column from the other, and all the text is right there. You can just use your finger and scan up and down the lines. If you need to, you can pan, and it pans to the other part of the table. It's just incredible. Going back to a single-line braille display now feels clunky to me.

JOE:

Just on that point, multi-line braille has several modes. We were talking about how we wrap text and move down and gather the next text until the braille display is full. That's in wrapped mode. We also have what's called cropped mode, which is where the display shows a window into the text on the screen, such as in Excel. For instance, at the beginning, if you've panned all the way to the left, you'll see columns one, two and part of three. Then if you pan right, you'll see from the beginning of column three across to, say, column five.

So basically we don't wrap the lines, we show the first 32 characters, the next 32, the next 32. It's not exactly like that because of contracting the braille and things like that, but basically you'll show a window into the text, as opposed to wrapping a line or a paragraph, I should say, or multiple paragraphs across the display.

We also have to be able to handle contracted braille. When you contract braille, it no longer takes up the same space as one-for-one computer braille. Then again, you have to be able to compensate well how much you show on the display. So there are a lot of complexities with multi-line braille.

OLEG:

Would cropped mode also work in a Word document, where you can see the layout, the margin, the tabs and everything else?

JOE:

Yeah, you can. You can change from wrapped mode in Word to cropped mode, and then you can feel how text is centered, or a bullet indentation and things like that. That is correct. The other amazing thing you can do is in Notepad. Say you open a CSV file, which is a comma-separated values file, in Notepad. It'll look exactly like it does in Excel, where it will automatically vertically align numerical data and separate it, so that instead of it all being one jumbled mess of numbers with commas, it'll look like perfectly aligned tabular data.

OLEG:

So you're going to see the layout, you're going to see the relationship between the various columns, like various pieces of data right there.

JOE:

Yes. We process it on the fly. Let me give you an example. Say you have a Notepad file, and the first row's got, like, five, 10, 20, 30, and the next line's got 1,000, 2,000, 3,000, 4,000. Well, the five is going to be lined up with the last zero of the thousand. The commas will be lined up. You'll have enough space between the columns. The next column has 20 and 2,000, so the two zero will be lined up with the last two zeros of the 2,000. What I'm saying is it automatically formats, as well as translates, vertically aligns, and wraps at the right places if you're in wrapped mode. There's a lot of processing going on, without much more overhead.

OLEG:

Is that helpful in programming? Because sometimes you use things like tabs to indent or outdent parts of the code. We used to do it with speech, like "tab, tab" and so on. Does that make things easier on the braille display?

JOE:

Yes, it does, especially with programming languages like Python. I mean it depends on your editor, of course, but yes, you can tell it which character to vertically align to, for instance. In Notepad with a CSV, you might tell it to align to a comma. It's also smart enough to figure out that if it's numerical data, you want to right-align the text in the column, and if it's text data, you probably want to left-align it in the column, so it's even smart enough to figure that out.

OLEG:

You gave me examples in Excel. What if I use another spreadsheet software? Or you gave me examples in Microsoft Word, but then what if I'm using Google Docs or LibreOffice?

JOE:

I’ll go back to the Word example and the Google Docs example. Say your cursor is outside a table: we are going to wrap the text automatically, and you'll just see pages of text as you'd expect. As soon as you arrow into a table, it will automatically switch to cropped mode and automatically vertically align tabular data. It'll automatically figure out that if a column is all text, it'll left-align it within the column, and if a column is all numbers, it'll right-align it within the column. Then as you arrow out of the table, it will go back to wrapped mode automatically.

OLEG:

So I don't need to switch modes?

RUSTY:

No, it's completely seamless.

OLEG:

Because in Excel I don't really need wrapped mode, the default should be cropped, and that makes all the sense in the world.

JOE:

Yes. In fact in Excel you won't even see the wrapped mode option, because it's totally irrelevant. What you'll see in Excel are things like: do you want the vertical line indicators between the columns, or do you want the dot leader before the cells? Where do you want the split data? You can still split data with a multi-line display. We had buffered mode before, and maybe many users don't know what buffered mode is, but it's a very powerful feature where you can freeze, if you like, a paragraph or portion of a document in Word or on the web for comparison; you can freeze that and put it somewhere on the display and have the live text in the other part of the display. Well, now with the multi-line display, you can specify how many lines you want for the split data. You might have the active part of the display showing the first four lines and the split data showing the last four lines. You can have the split data at the top or the bottom of the display. There's a lot more flexibility even in the split modes within Word, where, if you move to an annotation or a comment, the comment will show at the bottom of the multi-line display automatically as you arrow through the text. Then when you come out of the commented text, it'll fill the display again with regular text from the cursor.

RUSTY:

There are a couple of other areas where this really shines. For example, you know how JAWS can show attributes like bold, italic, underline, normal? Now you'll see the text, and on the line directly below that text you can see the attributes that are assigned to each character. If you have a line that has mixed bold and normal text, you can tell what part of it is bold and what part of it is normal, and it rotates just like it would if you had, say, bold and underlined. I mean, it's pretty incredible. It's better because now you can see the text, and directly below it you can see the attribute of that text.

JOE:

Just to clarify, so if you've got your finger on the H of hello, you move it straight down to the line below it, you'll see B for bold. You put your finger on the O of hello and move straight down, you see the B for bold. You move over to the next word there, put your finger on the T, move it straight down and you'll see N for normal. It's no longer bold. You see the relative position and the attribute of the actual text without having to remember. With the old attribute mode, you kind of had to switch between them.

OLEG:

Also, you mentioned the web, and you mentioned links. Rusty, give me a use scenario for multi-line braille on a website.

RUSTY:

So especially with smart navigation, you might have a line that has three or four links on it. Well, with the old structured braille, you could only see one link at a time. You had to actually arrow across to the link to have the other one appear. Now, with the new structured mode, you can see all the links on that line that will fit on the display. You can just use your cursor router and click whichever link you want.

JOE:

Even more than that, you see the heading above the links, you'll see the links, you see the text below the links, whatever text will fit on the display.

OLEG:

How do you click though?

JOE:

Yeah. Okay. Cursor routing, particularly on the Monarch, for instance: what you do is touch a particular cell and then press the action key, which is the key between the left three and the right three dots of your braille input keys. The cursor will jump straight there. Or if you're touching a link, it'll activate the link.

OLEG:

So basically by touching and pressing the action button, you can simulate a click.

JOE:

Yes. There are two modes with the Monarch. You can turn off the requirement for the action button: if you just hold your finger on a cell and release, you can route, but that is a lot more sensitive and prone to accidental routing. The default is to touch the cell and then press the action key.

OLEG:

Is multi-line braille helpful in navigating interfaces? For example, you've got Settings Center in JAWS, or some window where you set up some options. Is that going to be multi-line as well?

JOE:

That's a really good question. At the moment, structured mode makes much better use of your multi-line display. Line mode will still try to gather text the way it is in documents, which isn't that useful in a UI. We don't use line mode in dialogs, at least in the US. In structured mode you'll see the prompt, the type, the text, the description of the dialog, the dialog title, the type of the dialog, etc., all on the lines. We will break at component boundaries. We'll put as many components as will fit on a line, and then if the next component is going to be wrapped, that whole component will be moved to the next line. By a component, what I mean is name, type, value, position, dialog, static text, title: each of those structured components.

RUSTY:

I also discovered something very interesting. If you're on the desktop, you will see the This PC icon, but it also shows the date, the time, and whatever items you are showing in your system tray. If you put your finger on one of those items and press the action button on the Monarch, it will actually activate that item. Take OneDrive: you click that, and it brings up the menu for OneDrive. I didn't know that would work. I was just testing and discovered that I could get the context menu, or a menu, for these items.

OLEG:

Where are you going with multi-line braille support? I guess the question that's going to be on everybody's mind is how about graphics and images?

JOE:

I would love to continue development of that, as management allows. Yes, we should be able to do some amazing things with graphics. Because graphic translation to braille is a complex thing, you can't simply throw a graphic on the display. You have to analyze it and figure out what the most important parts of the outline are, or the features of the graphic that need to be translated to dots, because obviously you don't have color, and you've only got low resolution. There is a lot of work to be done there, but yes, I would like to see us at least be able to do graphs in the math editor. I would like to see us be able to do an outline of a font, or an outline of a graphic on the web, and anywhere there's a graphic, give some representation of that graphic on the display.

OLEG:

Well, Joe, Rusty, thank you for sharing with us. We look forward to more people trying multi-line braille.

JOE:

Thank you.

RUSTY:

Thank you.

 

An AI User Survey by the AFB

OLEG:

Now, back to one of our favorite topics these days, and that's artificial intelligence, or to be more specific, how AI is being used in real life. This is the subject of a pretty exciting study that's being done by AFB, the American Foundation for the Blind. We were recently made aware of this study, and I thought it might be a good idea to invite some of our colleagues from AFB to talk about it here on FSCast. Let's start with a quick introduction.

ARIELLE SILVERMAN:

Hi, I'm Arielle Silverman and I'm the director of research at AFB.

STEPHANIE ENYART:

Hi, I'm Stephanie Enyart and I'm the chief public policy and research officer at AFB.

OLEG:

When I think of AFB, the first association that comes to my mind is AccessWorld Magazine, of which AFB is the producer, but the American Foundation for the Blind is definitely much more than AccessWorld Magazine. Could you please give me some background on who you are and what you do?

STEPHANIE:

AFB has been around for well over a hundred years. Our mission at the American Foundation for the Blind is to create equal opportunities and expand possibilities through advocacy, thought leadership and strategic partnerships. We do a lot in the space of generating knowledge that drives change. Both Arielle and I are part of the Public Policy and Research Institute at the American Foundation for the Blind. People have been relying on AFB research for decades, as well as advocacy that has roots going all the way back to Helen Keller, who was with us for about 40 years of service. We are currently focused a lot on technology and transportation issues across the lifespan, so not for particular age groups, but for all individuals. We're really excited about a lot of the work we've been doing in the AI space.

OLEG:

For those of us outside the United States and outside the US politics, AFB is not an organization of the blind, it is actually for the blind, but you have very good and well-established relationships with organizations of the blind, including the fact that your current president served for quite a while with ACB. Is that correct?

STEPHANIE:

Absolutely. Eric Bridges is our current president and he also came to us from a membership-based organization of the blind, which is called the American Council of the Blind. We work closely with many different organizations, not only ACB but also the National Federation of the Blind, as well as many other organizations across the blindness and disability fold.

OLEG:

Give me some idea about your effort in the realm of technology.

STEPHANIE:

We've been doing technology-related work, or work related to things that are emerging in society, all the way back to the beginning of our organizational history. Right now we're looking at emerging technologies like autonomous vehicles and artificial intelligence, and we've also had a very long history of working in digital accessibility, like website and app-related accessibility: screen readers or screen magnification software and how they function across different types of digital assets. There's a lot that we've done over the years, and some of this is our current focus.

OLEG:

Well, speaking about AI, it looks like today everyone and their brother is interested in AI, but everyone has a slightly different reason for that interest. What caused AFB to start AI-related research?

STEPHANIE:

Well, I think it's really the kind of space that we live in, in our disability ecosystem. We're really focused on trying to be in that thought leadership conversation with key stakeholders, whether we're talking about companies that are developing technology, or the academic sphere where people are studying and looking at these innovations closely through a scholarly lens, or a more traditional public policy advocacy space when you're talking about policymakers at the federal level. Those are the spaces that we stay very plugged into, and we know that AI will have a huge societal effect, and there are many, many opportunities for the blind and low vision community that are rooted in the AI innovation space. It made sense to begin to set the table on our research in that regard and do a research series.

OLEG:

The results of your research, are they going to be targeted, in the first place, towards AI developers, towards AI policymakers, towards companies or individual users, or a little bit of all of these?

STEPHANIE:

It's spaces from corporate to academic to traditional policymakers. These are all of the spaces and others that we lean into. We also share with consumers. It's a multifaceted audience that we leverage our research with.

OLEG:

What are some of the questions which you're trying to answer?

ARIELLE:

Right now, we're doing a survey, and we'll explain how you can participate, because everybody in the United States who's at least 18 years old is eligible for the survey. You do not have to have a disability. In fact, we really need a lot more people who don't have disabilities to fill out the survey, because we want to compare and contrast the experiences of people with and without disabilities. We're also looking for people with other disabilities besides blindness to fill out the survey.

On the survey we're asking questions about how people are using AI to try to see if people with disabilities are using AI in specific ways for assistive technology purposes, and if so, how helpful or unhelpful do they find the AI tools to be, in contrast to people without disabilities. We're also looking at how people are learning to use AI and if they're experiencing any accessibility gaps when they are trying to learn to use AI. We have questions about AI use in the workplace and policies in the workplace that might restrict the use of AI, experiences with AI or automated surveillance systems in the workplace that might disproportionately harm people with disabilities.

We have some questions for people who have looked for employment in the last two years to see if they interacted with AI or automated assessments that could be unintentionally biased against people with disabilities, or inaccessible, or not fully accessible. We have some questions about autonomous vehicles for those who have actually ridden in an autonomous vehicle, as well as general attitudes toward the relative importance of producing autonomous vehicles and also the relative importance of expanding public transportation. We have some questions about perceptions related to privacy and whether AI or humans are perceived as more private for assisting with certain types of tasks.

Those are the main parts of our survey. And like I said, anybody in the US who's an adult is eligible to participate.

OLEG:

Well, quite frankly, that's a huge survey. How long do you expect it to take to fill it out?

ARIELLE:

Fortunately, it is set up so that people only have to answer the questions that apply to them. If you indicate on the first page of the survey that you don't use certain types of AI, you don't have to answer questions about those types of AI. We're finding that people spend anywhere from 10 minutes to 45 minutes filling out the survey, depending on how many different types of AI they use. Of course, people can decide how much or how little they want to write on open-ended questions.

OLEG:

Is that anonymous? Because you're asking some employment related questions, and not everybody is going to be comfortable sharing that information.

ARIELLE:

That's a great question. We do have an informed consent form, and we ask people to write their names as a form of signature on the informed consent form, indicating that they consent to be in the study. However, when we get the data, the first thing that we do is eliminate all the names and email addresses from the data. The only people who would have access to the names of people, linked to what they wrote on their surveys, would be the two primary investigators on the study, but we would immediately remove the names and not look at the links between your name and what you write on the survey.

OLEG:

What would motivate a person to give 10 to 45 minutes of their life to a study like this? Why would somebody want to get involved?

ARIELLE:

Well, we are giving away 20 prizes: twenty $50 gift cards. I also think people want to be involved in the study because they want to influence the direction that AI is going in, because we are going to be sharing the results directly with AI developers and tech companies. You have the chance to influence, for example, AI that's being used for image description, and to share your opinions and experiences about how well or how poorly the AI has been working for you. Your specific suggestions for improvement are going to be transmitted directly to those technology companies, anonymously of course. I think that can be motivating: just to be part of making AI more inclusive, which is a really hot topic right now, because AI is such a big deal.

OLEG:

I'm pretty sure you're going to do some extensive data analysis to see trends in the responses. Are those results going to be published for the broader audience?

ARIELLE:

Absolutely. In early 2026, there will be at least one public report, which we will publish on afb.org along with a summary, so that people can learn about the main trends and patterns that we observe. We'll also be sharing the results directly with the survey participants, presenting them at conferences, and sharing them in peer-reviewed journals. There will be different ways of sharing the information.

OLEG:

What does somebody do when they want to take the survey? Where do you find the link to that?

ARIELLE:

So the easiest way would be to go to AFB.org/AISurvey. There will be a description of the survey. There will be a button or a link that you can click on to start the survey. What will happen is that you will fill out a really short form called an interest form. The main reason that we have people do this is because there are a lot of problems with bots and scammers trying to take surveys for money, and we want to make sure that the bots and scammers can't get into our survey. We have everybody fill out this interest form first. Then within 24 hours somebody will look at your response and verify that you are human, and you are legitimately a person who should fill out the survey. Then after that happens, we will send you an email with a link to the actual survey and then you can take the survey.

I will mention that there is also an alternative way to take the survey. If you would prefer to dictate your answers to a researcher rather than taking it online, you can schedule an appointment to take the survey with a researcher. You can also schedule an appointment to take the survey in American Sign Language if that's how you communicate. When you fill out that interest form, you will be able to indicate whether you want to take the survey online, or with a researcher, or via ASL.

When you fill out that initial interest form, it will ask you how you heard about the study, so please say that you heard about the study via FSCast.

OLEG:

Okay, and the survey closes on?

STEPHANIE:

September 30th.

OLEG:

Depending on when you're listening to this, you've got about a month to fill this out and make your voice heard. Probably my final question, and that one goes to Stephanie. In today's world, how powerful, or how successful, do you think AFB is in making the voices of blind and visually impaired people heard? Here's why I'm asking: this is a very controversial world, so a group like AFB has to reach out across the different areas of society. How successful are you at this?

STEPHANIE:

I think that we're quite successful. I could point to certain things, but I feel like we're in a moment where people are searching for data, reliable data, to be able to drive important decision making, like whether or not to create a certain aspect of a new technology tool, or how to regulate, or not regulate, technology. Those sorts of decisions are often rooted in a body of evidence and research. Whether you're a policy maker or a tech innovator, you're striving and thirsty for that information from a trustworthy source. We're really the only outfit in the disability space that has a team of PhD-level social science researchers with lived experience of disability. We have all of the technical capacity, as well as the lived experience, to tailor studies like this so that they have the reliability of any other academic scholarship.

What you're investing in, as you spend time in a study like this one, is putting your voice and your experience among many other voices that can then be used by a variety of key stakeholders on the issues that we are studying and focusing on, issues that affect all of our lives. I feel like this is a really wonderful opportunity for the disability community, because AFB does not sit in other spaces; we sit in a space where we try to study and cultivate that knowledge. Then, literally through our long-term relationships, we serve as an ambassador for our community in helping others understand the intricacies of issues like emerging technology.

OLEG:

Stephanie, Arielle, thank you for taking part in our FSCast. I do hope many of our listeners will take part in AFB's survey.

STEPHANIE:

Thank you.

 

News From the Training Department

OLEG:

Now I'm joined here in our virtual studio by Elizabeth Whitaker. Elizabeth, hello and nice to be hearing from you again.

ELIZABETH WHITAKER:

Hello. It's great to be here as always.

OLEG:

So, tell me about August.

ELIZABETH:

Well, August was a really busy, fruitful month. We had a couple of webinars that were really well-attended. The first was our AI webinar, the first in our new series. This is not just a series of webinars; it's a complete training package. We have a new webpage on our training page at Freedomscientific.com/LearnAI. This is where you can go to find out about our upcoming AI webinars. The webinar that we did in August is archived there, as well as on our Webinars On Demand page, and you can also get resources, tips, and practice exercises. We had over 600 people attend this first webinar, because we know that AI is definitely an integral part of our daily lives, everything from home to work to school, everywhere. That was our very first webinar, and we are really excited about the second one. Also in August we had a PDF tips webinar, and that was very well-attended too.

OLEG:

Yeah. That was Six Tips For Managing PDF Files. I was reading this and I thought, "Hey, which tips do I know? How do you manage PDF files? Okay, I can create them, I can rename them, I can delete them, I can copy them, I can paste them." That's only five, and I bet these are not the tips that you were talking about.

ELIZABETH:

No, but those are good ones. We covered a lot of things around how to open files in Adobe Reader versus on the web. We covered reading order: what is it, and what do you need to know about it? We covered a lot of the settings that really make a difference, mostly when you're trying to read an untagged PDF and it's not working out so well. If you're having difficulty, what can you do to make it read better? That was really what it centered on: tips for settings, and just gaining a better understanding of PDFs.

OLEG:

That webinar is also archived on our webinars page, right?

ELIZABETH:

Yes.

OLEG:

Okay. What do we look forward to for September? What are you working on for September?

ELIZABETH:

We have our second webinar in the AI training series taking place on September 4th, a Thursday, at noon Eastern. This one is Tips, Tools And Techniques: Where To Begin. We're going to talk a lot about prompting and about things to consider when you start using AI. How do you talk to it? How do you have a conversation with it? How do you get the information that you are seeking? So that is September 4th. Then on September 18th, we have a webinar on Google Sheets. We've covered Sheets in the past, but it is time to update that training; we've gotten a lot of requests. That's going to be Thursday, September 18th at noon Eastern.

OLEG:

Cool. Any news on InsertJ Club, by any chance?

ELIZABETH:

Absolutely. We just had another training Q&A at the end of August, and this is where people can come ask training questions. Those are always a lot of fun, we get a lot of great questions. Then we will send out the answers to those questions to InsertJ Club members in September. That is something to look forward to. We'll have additional training for InsertJ Club members and other things being sent out as well. If you need more information or you want to join, go to FreedomScientific.com/InsertJClub.

OLEG:

Thank you, Elizabeth, and thanks to all the folks working in the training department. This brings us to the end of this episode of FSCast for August 2025. For your training questions, please write to training@vispero.com, and for FSCast questions and feedback, write to fscast@vispero.com. Elizabeth, I hope to see you next month, and I also hope to see all of our listeners, and those who may be joining us for the first time, either for the webinars or for FSCast. Thank you, and have a good time. Bye.

ELIZABETH:

Bye!