FSCast #243 FSOpenLine

March 2024

RACHEL BUCHANAN:  Hello, and thank you for joining us for FSOpenLine, Freedom Scientific’s Global Call-In Show.  We thank you for joining us for this March 2024 episode.  And let me tell you a few little housekeeping items just in case you haven’t joined us on this show before.  If you’re on the Windows desktop, you’ll press ALT+Y (Yankee) to raise your hand.  And if you are dialed in via telephone, you’ll press *9 back to back to make sure your hand is up.  Now, without any further ado, let’s go ahead and introduce my colleagues who are joining us this evening.  We have Mohammed Laachir, Glen Gordon, and Ryan Jones, who are all joining us this evening.  Hello.  How are you, gentlemen?

RYAN JONES:  Hey, Rachel. 

GLEN GORDON:  I was just thinking back to our first OpenLines that you hosted, and you sounded a little more tentative then.  You sound just completely in control these days.

RACHEL:  I was so nervous that first one.  Very nervous.

GLEN:  Yes.

RYAN:  I felt like I was on a professional radio show just for the introduction.

GLEN:  Yes, exactly.

MOHAMMED:  I did, too.

RACHEL:  We have some exciting things to cover.  We had a March release that was pretty huge.

MOHAMMED:  Right.  So in March we released really a blockbuster feature, which is called Picture Smart AI, and that is an update to the Picture Smart feature.  Glen has created a great demonstration of it on the FSCast that came out today.  So go listen to that because it tells you comprehensively what you can do with it, and it shows you some of the things that now have become possible.  It will allow you to describe pictures in detail – in great detail, in fact – and let you connect with pictures that your family sends you, with charts that you might use for your work or for your school, with all sorts of stuff that you weren’t able to do before.  In fact, you can even capture screenshots.

So all of that is great.  And, you know, there is a lot more where that came from because AI is slowly but surely growing in its capabilities.  And we are closely tracking that and adding as much as possible to the screen reader as we can there.

GLEN:  At least for me, it has a very different role than it does for other people who have some vision left, or at least had some vision in their lifetimes, because I don’t really care about detailed descriptions of scenes and so forth.  But I care about descriptions of diagrams and other things where you’re getting additional information beyond what is often written in text.  But I know for others of you, you have different views and different things that are more important; right?

RYAN:  Yeah, I was low vision growing up.  So even though I have little vision left, I still have a lot of memory of what things look like.  So I connect with some of the descriptions that way because it kind of brings back some memory, I guess, for me, like in some of the details that I hear in some of the images.  So, yeah, I think the experience is different based on kind of your life experience and the amount of vision you have or didn’t have.  And Mohammed, I think you’re probably in a kind of similar boat as I am.

MOHAMMED:  Yeah, yeah.  So it really allows you to visualize what’s going on in an image, and that’s kind of great because you sort of lose that information as you lose your sight.  And I, of course, completely lost my sight.  I don’t have even light perception left.  So it is just, it is awesome.  It’s almost like getting a little bit of your sight back, and that’s great.  But not to say, Glen, that the diagram descriptions and all that sort of stuff is not useful because all of that is also useful.  So, yeah, it is just an awesome feature all the way around.

RYAN:  One of the things that’s been fun for me is just finding different ways that it’s been useful.  I mean, I’ve found it to be useful when working in PowerPoint presentations, when I wanted to get a general overview of what a slide actually looked like.  JAWS would read the text just fine, and there were graphics, and maybe the graphics were labeled.  But to get an overview of how the slide was laid out, I’ve found it to be useful.  I’ve even taken screenshots of web pages to try to get an overview of what the whole page looks like.  Again, sometimes just for curiosity, sometimes because I was testing something or evaluating something.  But I’m curious, Rachel, Glen, Mohammed, what are some other ways that you all have found it to be useful as you’ve been using it?

RACHEL:  I really enjoy using the screenshot command to get basically the same information you just mentioned, Ryan, but from busy web pages or just to get an overall idea of what I’m dealing with before I start, especially if I’m having a little trouble doing my typical navigation pattern.

GLEN:  I have found it surprisingly useful creating charts in Excel.  I’m not an Excel expert, and I’ve always shied away from charts because I had no way of confirming what I had created.  So it was sort of a one-way path if I didn’t involve another human.  And I found it pretty effective creating a chart in Excel and then asking Picture Smart AI to describe it.  You get some idea of whether or not you’ve come close to getting what you were hoping for.  So pretty surprising and quite powerful.

MOHAMMED:  I use it for a whole range of things.  For example, when I’m shopping online, I look at pictures of products and see what the product is all about, how it looks, whether it has physical buttons, all that good stuff.  I once used the screenshot feature because I couldn’t find a button with JAWS; it was hidden from JAWS because it was greyed out.  And ChatGPT told me that.  And that was also great to know: there was this button that I was trying to find that was there, but I apparently had missed something and had to complete some other part of the form first before that button became visible and usable.  So all that good stuff.  And then sometimes even just photos that I got sent, or that I found on a web page illustrating the news, or on social media, and all that stuff is also now accessible.  So I use it quite a bit, actually.

RYAN:  And Mohammed, we put in descriptions from two different AIs, which is actually something unique.  There could be another product doing this, but I don’t know of any other product out right now that is providing dual descriptions.  Why did we do that?  And which ones did we use for it?

MOHAMMED:  Well, just a small insight into our development process.  At first we did it just to test; right?  We had these two large language models, as you call them, AI models.  And we thought, well, we really need to see which one is better than the other.  And so we put them both in there and started looking at it and testing with it.  And we discovered that actually having two descriptions side by side is extremely useful: A, because the models usually focus on different things, and so you get a more complete picture; B, because models sometimes phrase things differently, and what can be very unclear when one model points something out can be much clearer when the other one points it out; and, to stick with the letters I picked first, C is that sometimes these models get it wrong.

And we call those hallucinations.  Sometimes these models see things that aren’t there, or see something and then make up something else that is related to it.  And we found out that, if you have two models, you can cut down on the hallucinations because, if one hallucinates, the other one probably will not.  And so comparing the two will tell you which parts of what a model says are dubious, and which parts to check, either by running Picture Smart AI again, or by getting help from a sighted person to look at things.  So that’s why we left it in.  Originally, it wasn’t our idea.  Originally, we thought, well, we’ll look at which one is the best, and we’ll put that one in.  And it turned out that, you know, having two is much better than having one, even the best one.

RYAN:  Well, I know we want to get to questions, but I did want to go ahead and address one of the big questions that I’ve been getting a lot.  The number one requested addition to this feature is the ability to ask follow-up questions about the images, or to have a chat conversation about an image.  And I will say that that’s something we’re actively working on right now, so I expect that it will be out.  Mohammed, your team is actually working on that from the engineering side.  So no pressure, but we’re all looking forward to being able to ask follow-up questions and get answers about the images.  And I think that’s going to be really exciting when we get to that point.

MOHAMMED:  Oh, so am I.  I mean, no pressure even from myself, but I really want that, too.

RYAN:  And it’d be interesting to hear as we kind of move into questions to hear from you all out there.  Who’s using Picture Smart AI, and what kind of use cases are you finding it for?  Because I think there are so many different areas that people are using this, and it’s great to share with each other how we’re finding it useful because we’ll all learn from each other and find new ways to help this to benefit our lives.  So I just kind of throw that out there as we get going.

RACHEL:  Anoah, I’m going to ask you to unmute.

ANOAH:  Thank you so much for having me.  I actually wanted to bring up something entirely different than Picture Smart.  I wanted to bring up JAWS support of International Phonetic Alphabet verbalization and braille output.  I’m currently pursuing my Bachelor of Music, and we use the International Phonetic Alphabet quite liberally, as do folks in wide-ranging fields from speech pathology to language studies.  And a gentleman named Robert Englebretson, who is a blind linguistics professor, has written code which can be slotted into JAWS’ various pre-supplied SBL files, and a JBT file which can also be loaded into JAWS.  But this is very non-standard, shall we say.  And so it would be of great benefit if Freedom Scientific would consider implementing these rule sets as standard.  And so I’d like to ask, has there been any sort of discussion about this as a valuable thing to do?

GLEN:  So I’m wanting to crawl under the table right about now.

ANOAH:  Okay.

GLEN:  Because I interviewed Robert Englebretson on FSCast probably a year and a half ago, two years ago.  And I promised that we would get this stuff in, and one thing has led to the next.  And I think we’re going to need to come up with a way of offering special dialects and adding those on top.  The problem that we’ve had is that IPA is not necessarily a simple add-on because some of the characters have different purposes in other languages.  And some of the symbols have different braille meanings depending upon whether they’re being used for IPA.

And so what we originally thought was, oh, just let’s drop this in, and it’ll work perfectly, has sort of morphed into this whole topic of how do we allow people who need it to turn it on without causing life to be difficult for people who don’t need it and might get unexpected results.  So yes, we are very interested, and we realize that specialized audiences are, you know, our bread and butter in many cases.  And so we just need to talk about it and figure out where the priorities sit and see what we can do.

ANOAH:  Fair enough.  And I completely understand that.  And I want to congratulate y’all on the work y’all are doing.  I just want to say split braille is fantastic.  Ironically, for the same purpose, IPA transcription.  And I’m sure that I will find great uses for Picture Smart in the near future.  So thank you so much for everything that you do and continue to do.

GLEN:  Thank you for joining us, I think for the first time.

ANOAH:  Yes, this is my first time here.  I have listened previously and keep in touch via the Mastodon profile and other such things.  But this is my first time here live.  So thank you very much.

RYAN:  Thank you, Anoah.

MOHAMMED:  Thank you very much.

RACHEL:  Yep, thank you for joining us.  And next I’m going to ask Deborah Armstrong to unmute.

DEBEE:  Okay, I actually have a feature request, and a comment on Picture Smart, and a question.  So I’ll try to be brief.  The feature request is about what just happened now.  You know, when you ask a participant to unmute, that box does not gain focus.  And I don’t know if this is something Zoom needs to fix or the scripts need to fix.  But it’d be really nice if it popped up on my Braille display, and it went ding or something when you ask people to unmute.  And perhaps you can fix that in the scripts.

GLEN:  Not that you’ve ever tried running a different screen reader.

DEBEE:  Yes.

GLEN:  But if you hypothetically ever try another screen reader, do you know, because you’re able to be psychic, if it works better with those?

DEBEE:  No, it does not work better with those.  This is something you’ll have to fix in scripts, or you’ll have to bug the Zoom folks about.  But one of the reasons it takes so long for people to unmute is they have to keep tabbing around to find it because it doesn’t gain focus.

GLEN:  That’s a good idea.  And for those of us who never get asked to unmute because we’re on the hosting side, we don’t feel that same pain.  So thank you very much.

DEBEE:  It would be great if it just popped up on the Braille display or, you know, if you have your speech set to Speech on Demand, if it would go ding or something.  I mean, you could always have that as a feature someone could turn off.

Okay.  Picture Smart.  You know where it’s really useful is if someone else is out and about and documenting things with their phone, and they send you those pictures, you have a clear idea of what they are.  Or if you’re out and about, and you document stuff with your phone, it’s just so nice to be able to put all of that together in one place in a folder on your computer, much easier to store than on your phone, and have Picture Smart to describe it to you later, just as sighted people save pictures on their computer.  So that’s my use case for it.

And my question is, when I run across what I’m pretty sure is a bug, how am I supposed to report it to you all?

RYAN:  Probably the easiest channel to get things in to us is through our tech support team.  They’re usually kind of the first line to help take that kind of information, and they can enter it into our bug tracking system and get the engineering team to look at it.  And I know you already know this.  But just for others that are out there, the more details you can provide, the better.  So if you can tell us what version of software you’re running, if it’s a bug in a certain application, what version of JAWS, what operating system, give us as much detail as you can so that we can try to reproduce it ourselves and then be able to fix it.  But I would typically start by emailing our support email address, support@freedomscientific.com, or you can call in, if you don’t want to email, and talk to one of the agents and feed them the information and describe it to them, as well.

GLEN:  And very often the hardest part for us in fixing something is being able to reproduce it.  So those steps for reproducing, especially if they’re non-obvious, are really critical.

RYAN:  And I believe about a year ago, Glen, Oleg on our team did a guest hosting of FSCast.  I think it might’ve been April of last year.  And, if anyone’s really interested in this, he described more about how to write bug reports, like how to give us all those details.  So if anyone really wants to learn it, go back to that FSCast and listen to it.

GLEN:  He did a great job.  And he is such a great bug reporter that he sets the bar high.  I just want to say, if you go listen to his report, yes, if you do everything he said, that’s lovely.  But if you can’t, and you can only do a small portion of it, we’re still interested in hearing when things don’t work well.

RYAN:  Absolutely.

RACHEL:  Thank you, Deborah.  I’m going to ask Misty to unmute.

MISTY:  So first-time listener, first-time caller, as well, just like Mr. Anoah earlier.  So first comment is actually Mr. Anoah mentioned that he had a blind linguistics professor.  Well, so did I up at Indiana University.  So blind linguists definitely exist.  Other people who might use IPA definitely do exist.  I thought I would just kind of throw that in there as another example of that.  And of course, as a classicist by training myself, one pipe dream that I would love to see at some point as far as JAWS speech would be support for Latin pronunciation or ancient Greek or Sanskrit, but that’s probably a pipe dream.

And secondly, I really do love Picture Smart mainly because now we have an option for the computer.  So that’s great.  So I know that the JAWS Key+SPACE and then P and then SHIFT+S gives you both ChatGPT-4 and Gemini.  But what does just simply hitting S after the other commands do?  Does that give you like a summary of both, or like one or the other?

MOHAMMED:  Okay, so I can take this one.  Thank you, Misty, for that question.  INSERT+SPACE or JAWS Key+SPACE, followed by P, followed by any letter without SHIFT will give you, at the moment, ChatGPT with a special prompt.  This may change because we are constantly evaluating the models and how they perform.  That special prompt makes sure that ChatGPT does not overcomplicate and get really wordy and flowery, but stays to the point.  So you’ll get a short and sweet description that is accurate and well-worded, but not very, very large.  And if you then click the More Results link, which will show up underneath ChatGPT’s response, you will get the same result as if you had pressed the shifted command.  So all the commands that you press without SHIFT will give you that shorter ChatGPT response.

RYAN:  And Mohammed, maybe we can take a quick second to remind everybody of some of those different commands.  So INSERT+SPACE and then P.  And then what are some of those?

MOHAMMED:  I think the one that I use the most is INSERT+SPACE+P, followed by C or SHIFT+C.  And that will give you a description of the control.  And that’s a little bit of a misnomer because what it actually does is take the current element and give you a description.  So, for example, in Google image search, all the images are presented to the screen reader as buttons, but you can still use Picture Smart on them with C.  It will go recognize that button, find the image inside that button, and actually describe that to you.  Then we have INSERT+SPACE followed by P followed by W, and that will make a screenshot of the current window that you’re in and describe that.  Then we have S, the one that Misty talked about, and that one is for the entire screen.

And we have a couple more.  So we also have INSERT+SPACE followed by P followed by F, F for file.  And that will actually describe the current file from File Explorer.  So if you’re in File Explorer, you’re on a JPG or a PNG file, you can use that command.  For help, you use the question mark.  So INSERT+SPACE+P+?, and there we will tell you all about all the keystrokes that you can actually press and what they do.  There are more commands, but those are the most widely used, I think.

GLEN:  I just want to mention Clipboard because I forgot to talk about it on the FSCast demo.  I can’t think of a specific situation where there’s something on the Clipboard that you might want to have Picture Smart recognize, but that’s JAWS Key+SPACE+P and then B, as in board.

MOHAMMED:  Right.  Now, we might as well do the last one, too, which is INSERT+SPACE+P followed by A.  And that will actually take a picture with the Pearl camera or a flatbed scanner and describe that.  So now you know them all.

RACHEL:  And when we were doing user testing, one of the users was scanning greeting cards, looking at the different images, and also reading them with Picture Smart AI.  So there’s another little use case.

RYAN:  Oh, that’s a nice use case.  I like that.

MOHAMMED:  Yeah, it is cool.

RACHEL:  Yup.  All right, let’s go on.  Thank you, Misty.  And let’s talk to David.  I’m going to ask you to unmute.

DAVID:  So I’ve been blind for 20 years, and I was a very visual person before that.  I started with JAWS 5.  And one thing I really like about JAWS is how, from time to time, fairly frequently, you come out with some great new features.  I thought the best thing that I had seen recently was Face in View; but this Picture Smart, like, blows that one out of the water.  Glen, I really enjoyed your demo yesterday because I had no idea that you could pause a video and get that described.  So I listened to that, and I didn’t even finish listening to the podcast because I wanted to go right to YouTube.  And I went to YouTube, and I loaded Michael Jackson’s “Thriller,” and I just had a great time pausing it in several places and listening to the descriptions.

RACHEL:  Oh, my god.

DAVID:  And I’ve also gone on to some museum websites because they have picture galleries, and you can sort of take a virtual museum tour of the paintings and all of that.  So kudos to you all.  I mean, it’s just an amazing feature.  And, you know, with Microsoft Copilot and Google, you can load a picture in and have it described.  But what really is great about Picture Smart is that it’s right there at your fingertips.  It’s not a multi-step process.  So I knew those things existed.  I do them from time to time.  But this is something you can do five, 10 times a day, whenever you feel like it, because it’s so quick and easy to do.

But anyway, I have an unrelated question, and it relates to place markers on the web.  I’ll set place markers in some places, and they stay around for a long time, like maybe months on end or something.  I’ve set them in other places where they don’t even stick the first time.  And I’ve put them in other places where they stick for today’s sessions, tomorrow’s session, and then they’re gone, you know, after that.  And I’m wondering if, one, you’re aware of that, and if there’s anything that can be done about it to make them stick around?  Or is there something underlying on those websites that make it so that that cannot happen?

GLEN:  We realize that this problem exists.  The problem is we don’t have a particularly elegant solution for fixing it.  Typically, what we do is we find the spot where you want to set the place marker, and then we find what we think is a fairly, you know, steady element from which to compute the offset; right?  If you’re setting a place marker in the middle of a paragraph, you have to make it relative to something because, if you, you know, make it relative to the beginning of the whole web page, that’s not going to last at all.  And our logic for deciding where to compute the offset is probably very much impacted by how much a page layout changes from time to time and just sort of the luck of the draw as to whether or not the offset from what we decide as the starting point is actually consistent.

It would be great if you have some examples that are public enough that we could reproduce and try where you’re finding that it’s inconsistent.  And we don’t need a lot.  You know, two, three, four would be great because this is technology that we created many, many years ago, and we’ve not gone back other than with the occasional very specific bug to sort of revisit and see what, you know, 10 years down the road we can come up with as a way of making it a little bit more stable and consistent.

DAVID:  If I find some fairly public places where they just don’t seem to be sticking, I’ll send those to you because it is a great feature.  Just be really nice if it could be more consistent.  So, anyway, thank you.

GLEN:  That would be great.

DAVID:  Yeah.

RYAN:  Thanks, David.

RACHEL:  Matthew, I’m asking you to unmute.

MATTHEW:  First of all, before I ask my question, you were just asking about the Zoom thing.  If you’re on a computer, you have to make sure you hit that OK button when you’re in a meeting that’s being recorded.  Otherwise, it won’t let you unmute.

RACHEL:  That is true.

MATTHEW:  I use Mozilla Thunderbird, version 115.9.  What basically happens is I could be on my desktop, for example, not doing anything, and then a notification comes in.  I’m on the JAWS users list, so it’ll say something like JAWS users X, where X is the subject of the discussion, and then it’ll say Glen Gordon or whatever.  But it’ll repeat it a couple of times before it finally hushes.

GLEN:  Well, we are giving some love to Thunderbird, although I don’t know that we have focused on this specific issue.  But I know how to track down the person who’s been working on that.  And so, I will raise this as a topic that at least we look into and do our best to try to resolve it.  I think in the May update there will be some improvements to the newer versions of Thunderbird.  I can’t say 100% it’ll be May, but that’s – that’s what we’re aiming for.

MOHAMMED:  One thing you can possibly look at is your notification history.  That is INSERT+SPACE followed by N for notification.  If you find that entry in there multiple times, it means that Thunderbird is sending that notification multiple times.  And we still need to deal with that; right?  But that is not JAWS double speaking; that is JAWS being double prompted.  Still a problem that we will look at and try to solve, but it at least gives us a bit of a feeling for where to look.

MATTHEW:  I just thought I would mention it and see where it goes.  But thank you very much, appreciate it.

RYAN:  Thank you, Matthew.

RACHEL:  Steve, I will now ask you to unmute.  Steve Cutway.

STEVE:  So Picture Smart AI, I have to congratulate everybody.  And Glen, thank you very much for the podcast because I was having some difficulty, largely because of my aging brain, probably.  Nothing seemed to be working until I realized I was not pressing the follow-up key, be it C, S, W, or B.  I have found Picture Smart AI very handy for describing album cover art or photographs.

On Wikipedia, for example, if I ask about a particular album, they always, or usually anyway, have a picture or image of the album cover.  And I found it very interesting to be able to understand better, you know, art on covers.  A good one to try might be the “Abbey Road” cover.  I understand F for file, and I thought I understood W for window, and S for screen, and C for control.  But if, as I’m sure everybody experiences these days, you get email that’s sometimes full of graphics, and I want to know what those graphics are, is C the appropriate key, the best key to press?  Maybe that’s for Mohammed rather than Glen.

MOHAMMED:  This is kind of a tricky situation.  For Outlook, if you want to only describe one image, you’re going to have to make sure that you enable UIA support for Outlook read-only messages, and that can be done through quick settings.  INSERT+V is the keystroke to get there, and then you find UIA support, and it will bring you right there, and you can turn that on.  And then we are much better at locating graphics in read-only messages.  Otherwise, it is kind of hard, and it may sometimes work, but it may sometimes also not work.

STEVE:  Right.

MOHAMMED:  So UIA for read-only messages is the thing that you need to enable in order to get C to work in Outlook.  Now, W will always work.  And if you want to get an overview of how the email is laid out, then W is a perfect way to do that.

STEVE:  Right, right.

MOHAMMED:  You make sure that your email is maximized, and just press INSERT+SPACE+P+W, and there you go.  It’ll describe the entire email rather than just one graphic.  So those are your two options.

STEVE:  Right, so UIA support, it doesn’t have any negative impact on any other aspect of Outlook, Read Messages or anything like that?

GLEN:  I’ll offer some ideas about that.  But I want to say that, once you’ve turned that option on, you need to fully exit Outlook and reopen it.  You don’t need to shut down JAWS, but you do need to restart Outlook before it’ll work.

STEVE:  Yeah, no problem there.  I get that.

GLEN:  We originally were going to turn this on for 2024.  And once we had it on for our private beta group, we got a whole bunch of pathological messages that took a really long time to open in Outlook with the UIA support enabled.  And so we did two things.  We disabled it for the release, and we also worked with our Microsoft contacts to, I think, resolve most of them.  They certainly resolved the most hideous of them.  So assuming you’re running Office 365, I don’t think there’s any downside with turning UIA on at this point.

RYAN:  Depending on what email client you use, you can also open a message in a web browser, and that should give you access to the images in that message in your browser, which you could then move to with the virtual cursor and use INSERT+SPACE+P followed by C.

GLEN:  I think in Outlook we have unlabeled graphics not announced by default.  Am I remembering correctly that you actually have to turn that on, too?

MOHAMMED:  Yes, actually, you are.  That is entirely correct.  Yeah, so in Settings Center you can turn on announcement of unlabeled graphics.  Otherwise, you might just miss graphics because previously, right, if you had an unlabeled graphic, there was nothing you could do with it.  Just clutters up your virtual document.  But now you might actually be able to do something with it.  So yes, Glen, that’s actually a good one.

GLEN:  And I just want to say, Steve, in terms of losing brain cells, I’m not that far behind you.

STEVE:  I know you’re not, my friend.  I know you’re not.

GLEN:  Yes.

RACHEL:  Thank you, Steve.

STEVE:  Take care, all.

RACHEL:  I’m going to ask Tedros to unmute.

TEDROS:  Thank you all for what you are doing.  Glen, I really enjoy listening to FSCast every month.  It’s really a pleasure listening to you.  I don’t really know whether this is related to JAWS or Windows, but the behavior is sporadic.  Every now and then, focus gets moved to the system tray, the chevron is activated, and all the hidden applications are visible.  So it disrupts the flow, let’s say, if I am in Microsoft Edge doing something and focus jumps to the system tray all of a sudden.

RYAN:  And the question is whether focus is actually moving, like something is triggering focus.  You’d be able to tell pretty easily by using Narrator, for example, and seeing whether it still happens or not.  Can you let us know, once you test a little bit, if this only happens in JAWS, or if it happens with other screen readers?  This is one I’ve never heard of before, so it has my interest, for sure.

TEDROS:  Okay.  This is actually a feature request with regards to Zoom.  I was wondering whether you could possibly institute some kind of keystroke to bring up the raised hands, for example.

RACHEL:  So in the participants list, when you use ALT+U, you should be able to tab around and use HOME.  All of the raised hands float to the top of the list.

GLEN:  I had not realized that they are all moved to the top of the list.  That’s a useful tip.

RACHEL:  It is, yeah.  It’s something relatively new that they’ve added.  Not that new, but it’s helpful.

TEDROS:  Okay, thank you.

RACHEL:  I’m next going to ask Ines to unmute.

INES:  Okay, so I have a quick question, a couple of questions, actually.  Would it be possible for AI to write JAWS scripts or provide a UI to interact with an inaccessible application?  I know Apple does this with VoiceOver to some extent.  I think they have a neural engine, and they do it on-device, the way I see it.  It’s limited, but it lets you navigate controls in apps that aren’t designed properly, based on the AI’s analysis of the screen.

MOHAMMED:  So this is kind of the Holy Grail, right?  If we can get this right and working well, then you’ll have gold in your hands.  This is something that isn’t quite possible currently; but things, like I said before, are moving quickly, and we are definitely keeping it in mind.  If it becomes possible, we would definitely consider doing it.

RYAN:  I think you’re going to see in the next year, and especially year to two years, a lot more will open up with onboard processing.  So it’s going to increase the pace of some of these things even further.

INES:  Regarding Discord accessibility, I know JAWS added enhancements to the keystrokes and so on, but there is one issue that’s also present in iOS.  I’ve not been able to reproduce it with other software in Windows.  When you turn off the navigation keys and you navigate a list of messages in Discord, if you go to the last message, you’re at the last line of that message, and you down arrow once, JAWS says “list”; and as soon as JAWS is in that area, it will scroll all the way back in the chat, and you’ll be in a very old part of the thread.  For some reason, JAWS will lock up the computer.  I have a relatively fast computer, and it will shift focus and lock up the computer and JAWS for a couple of seconds.  So do you know what’s going on?  Is that something that’s been reported?

GLEN:  And this is just with a standard Discord client for Windows?

INES:  Yes, the same client.  Yes, mm-hmm.  And Apple, for some reason, VoiceOver does the same thing on the iPhone.  It will behave the same way when you go below the last line of the last message.

GLEN:  Sounds like a bug on their part.  Do you know about Doug Lee’s Discord scripts, by the way?

INES:  No, I haven’t tried those.  Does he have Discord?  I remember the old Skype scripts, but I didn’t know he had Discord scripts.

GLEN:  Yeah, he does have Discord scripts, and he’s actively maintaining them.  So it may be worth a visit; I think it’s dlee.software now, or possibly dlee.org.

INES:  Mm-hmm.  Yeah, I will check those out, as well.  And the last thing I want to know is, for Screen Curtain, is it possible to make it the default?  So, like, if you have to reenable it each time you need to restart JAWS or restart your computer, would there be a setting in Settings Center to make it come on with JAWS as a default behavior?

GLEN:  It’s technically really easy to do.  The question is, we’ve often been concerned that somehow we would manage to make it impossible for someone to get sighted assistance and to get out of that situation.

RYAN:  We’ve not exposed that for that reason in the past.

INES:  I mean, they could kill JAWS, right?  And that would restore the screen if something became unresponsive?

GLEN:  If they can figure out how.  But think of it this way.  Say you’ve done this, and you have a novice user.  They don’t know how to shut down JAWS.  They can’t get help, you know.  Now you’re forced to reboot.

INES:  I would just do ALT+CTRL+DELETE or turn on Narrator, I mean, CTRL [indiscernible].

MOHAMMED:  You’re not a novice.  And think about it: on an iPhone it’s really easy, because turning Screen Curtain on or off is a three-finger triple tap, and turning VoiceOver off is just pressing the side button three times really quickly.  For us it’s just as easy, because we have a whole keyboard to work with, but people may not remember the keystroke.  On the iPhone it’s much easier to remember, which is also, I think, why Apple felt confident in letting people actually have a persistent Screen Curtain.  For us, that’s a bigger ask, I think.  So that’s why we haven’t done it.

GLEN:  So, I’ll tell you a funny story about this.  I got a battery replaced in the iPhone, and my wife took it to a local repair shop.  And he said to her, “I’ve replaced the battery for your husband, but I don’t know what he did to his iPhone.  I don’t see anything on the screen.”

MOHAMMED:  Oh, actually, I had something similar where a lady approached me on the bus, and she said, “Can I tell you something?”  And I’m like, “Yeah, yeah, you can tell me.”  And I heard the pity dripping from her voice.  And she said, “You know, you’re swiping on your phone, but nothing’s happening.  The screen is dark.”  I should have had a Screen Curtain.

INES:  I actually have one.  I had someone ask me if I had a problem with my phone when I was using it.  I was on a bus.  It’s the same.

RYAN:  I think we’ve all been there.  Thank you, Ines.

GLEN:  Yes, thanks.

RYAN:  We appreciate your time.

INES:  Then thank you.

RACHEL:  All right, I’m going to ask Ignacio to unmute.

IGNACIO:  First of all, congratulations on the Picture Smart improvements.  I’ve been using JAWS since version 1.1, so... 

GLEN:  Wow. 

IGNACIO:  July or August 1995, that’s my first time.  Anyway, my question is actually less technical than those of a lot of your other listeners.  For some reason, and I don’t think this is just with the latest update, there are times when I’m doing something, I don’t know exactly what it is, that’s my problem, and JAWS Tandem comes up.  So somehow I am pressing some key combination that brings JAWS Tandem up, and it takes me a little while to figure out what happened, and I have to tab to Cancel and close it.  Do you happen to know what that key combination might be that I’m pressing?

RYAN:  INSERT+ALT+T, I believe.  Yeah, that brings up the Allow Access to my computer dialog.  Is that what’s coming up, where you tab through and there’s an Allow Access button, a Cancel button, Meeting ID?

IGNACIO:  Maybe I didn’t tab all the way through, because all it tells me is to enter the code given to me by the other person.  But I’ll try that, unless my ALT key is at times stuck.

RACHEL:  Right, that’s what I was going to mention.  Are you doing INSERT+T and your ALT is sticky?

RYAN:  The ALT key is sticky.

IGNACIO:  It could be, it could be, now that you mention that.

MOHAMMED:  So a good trick that you can use to find keystrokes, sorry for interrupting, is INSERT+SPACE followed by J, and that will bring up Command Search, which is what I just did to find this out.  And then you can type “tandem” in there, and it will tell you, as long as you are on your desktop, it will tell you that there is actually a keystroke to toggle Tandem on or off.  I never pass up an opportunity to talk about Command Search.

IGNACIO:  No, you’re right.

MOHAMMED:  Because it’s a very interesting and good feature.

IGNACIO:  It is so helpful.  There are so many useful things that we don’t, that I don’t use, I should say, not to generalize it.  My second and last question is, and this may have come up before, there are times, not always, when I’m typing an email, typing away, and all of a sudden there’s no response.  I get absolutely nothing.  I get blank, blank, blank.  And it turns out it happens, not always, when I press the letter G.  And when I go back, once I’m able to read the text again, the G’s not there.

MOHAMMED:  Could it be the game bar?

RYAN:  Mohammed, I think it’s the game bar.  I was trying to remember what it is, but something is triggering the game bar in Windows.  I don’t remember what the solution is, but I know I have heard people describing this issue, and I believe there is a solution.

MOHAMMED:  Yes, you can turn it off somewhere, but I’ve also forgotten where exactly.  When I got a new machine a while back, I had exactly the same problem.

IGNACIO:  And that’s my problem.  I did get a new laptop, so maybe that’s it.

RACHEL:  So I think that this is in the same place as the system overflow settings, because I had to go to the same place to turn off the Teams that is not for work or school.  So there’s a place where you can go and turn off the game bar, what they call Windows Chat, which is the other version of Teams, et cetera.

IGNACIO:  Well, thank you so much.  And Rachel, congratulations, I mean on all the training stuff that you and your colleagues do.  I’ve been attending a lot of them, either live or on-demand, and they’re great.  So thank you.

RACHEL:  Thank you so much.  I’m glad you find them useful.  And I think we are at the end of our time for the evening.

RYAN:  Rachel, can I, let me share one thing that I just have to get out of my system here.  So when we talk about Face in View and Picture Smart AI, I don’t think we’ve really done a great job yet of publicizing that we’ve actually merged the two in a use case.  So in Face in View, when we launched it back last year with the initial 2024 release, you could use Picture Smart at that time to try to describe the background that was in your camera view, but it was using the old Picture Smart technology.

And so when we put Picture Smart AI out in March, we did connect it to that.  And we actually added some special prompts to tell it to describe your background, more than just your face.  So just a shout-out to these commands.  You start Face in View with INSERT+SPACE, then F, then O.  Once Face in View is running, you can press INSERT+SPACE, then F, then P for Picture Smart, and it will take a picture through your camera and describe more about your background so you can tell what’s in it.

So I did it just a minute ago, and it was describing the curtains and the blinds in the window that’s sitting here behind me.  So this is another good one.  If you’re on camera a lot and you’re using Face in View, you can use Picture Smart AI now as part of that to help describe your background.  So I just want to give a shout-out to that.  I think it’s a nice little tip that helps us be more productive and make sure that we’re professional when we’re on video chats.

MOHAMMED:  Yeah, it’s pretty cool. 

RYAN:  Thank you, this is good.  Thanks, Mohammed.  And Mohammed, you’re still awake.  It’s, what, 2:30 in the morning or something there?

RACHEL:  Oh, my goodness.

MOHAMMED:  It’s 2:30 in the morning, and it’s currently Ramadan.  So I’m actually going to eat, and then after that going to sleep.  And I hope, Ryan, you don’t mind if I start work a little bit late tomorrow.

RYAN:  I think you’ve got a pass to start a little bit late tomorrow.

RACHEL:  Yes.  I hope so.

RYAN:  Definitely.

RACHEL:  Well, thank you for joining us, everyone.

RYAN:  Thanks, all. 

MOHAMMED:  Thank you, everybody. 

RACHEL:  Thanks.

RYAN:  Bye-bye.

GLEN:  Thanks.


edigitaltranscription.com  •  04/07/2024