EPISODE 12 - Med Device Software: Agility and Quality with Rightley McConnell
In this episode, Rightley McConnell, VP of Operations at Precision Systems, Inc., covers how to maintain quality and speed when developing medical device software.
Rightley and Dan discuss:
- Challenges to device software development for complex systems
- Use of Agile methodology and how to remain flexible in a regulated environment
- Applicable standards, including 21 CFR 820, ISO 13485, IEC 62304, & ISO 14971
- How quality can and should remain central at each development phase
- And more
If you have any questions, please feel free to contact us or connect with Rightley.
Dan Henrich: Hey, Rightley, thanks very much for coming on MedTech Mindset.
Rightley McConnell: Absolutely. Thanks for having me.
Dan: It's great to have you here and have PSI involved. Just so our listeners get to know you a little bit, can you quickly introduce yourself and PSI, and tell us about the different types of projects that you guys work on in MedTech?
Rightley: Sure. Starting with the company, we're Precision Systems Incorporated. We've been in business since 1979, and we're really focused on mission-critical, safety-critical systems that cannot fail. So we're writing code that goes inside devices that are in operating rooms, in in vitro diagnostics. They could also be life-sustaining, Class III types of devices, infusion pumps and the like. We'll do consultation, requirements, design, the software development, unit testing, final V&V, and pretty much anywhere along the line, also post-market. Once something gets out, if a customer needs some changes, we can help them with controlling that and updating the code, getting it back out to the field. So I'm the Vice President of Operations here. I have been for about four years now. And I'm a recovering engineer. I started here as a software engineer. It was just my second job out of school, and so I've been here about 14 years.
Dan: Great, great. And how about the different areas you touch within the product development process? So you mentioned the types of devices you work on, but what are the main roles that you play as you interact with clients?
Rightley: A lot of the roles that we play at the very early stages of product conception could be helping to get requirements in place, even for the product itself, and then also working through software requirements on down, you might say. So helping those customers with understanding how to roll product requirements into software requirements, how to make sure that they have all the right planning in place, how to make sure that the software is properly designed first, then implemented well. And it's really full stack from there on down. And then, of course, making sure that they're following all of the right regulatory procedures and have all the right SOPs in place, so that when they get through and submit a 510(k), for instance, to the FDA, all the documentation is there.
Dan: Okay. And we're going to delve into our theme here shortly, which is how to ensure speed-to-market, operating in an agile environment while maintaining high quality standards throughout the software development process. But can you just talk broadly a little bit about the main challenges that folks developing software for critical medical devices face throughout the process?
Rightley: Sure. There are two main paths that the challenges come down. You have the human, the soft challenges, and that usually deals with education: making sure that folks understand, when they go down this path, that the regulatory work, the design, the documentation, everything that goes around the process surrounding the actual product development can be as much or more than the product development itself. So a lot of the challenges stem from that, and from understanding and planning far enough out to make sure that they have realistic timelines and are able to get the right resources and the right stakeholders involved early enough, so that the product development goes smoothly as part of the overall development that has to be done in the company. So that's one. And then of course there are always technical challenges.
Rightley: Software is sometimes nothing but technical challenges on its rougher days. So a lot of those challenges come down the line with interfacing known hardware to unknown hardware, or interfacing different devices within a product suite. Anytime you have two different components or devices talking to one another, there's opportunity for challenge. Usually, code within a single system can be self-contained and controlled, and the development is, I want to say free of surprise, but it has a certain level of predictability to it. As soon as you start interfacing different systems together, and some are off the shelf, and some are custom, and some you're developing as part of the product, that's where technical challenge usually comes in.
Dan: Talking about maintaining high levels of quality through the development process, what are the applicable standards that come into play in your day-to-day?
Rightley: So for medical devices, everything is really dictated by and flows down from the FDA. And that's 21 CFR Part 820, which really talks about overall medical device development and quality management. From that, because we focus on software, our quality management system is really based around a standard called IEC 62304, which is a software development life cycle standard for medical devices. So 820 flows down and points to 62304 as an appropriate set of standards to use for the development of software.
Rightley: So our quality management system here at PSI is built around that, and that quality management system is also appropriate for, and we have been able to be certified to, ISO 13485. That's the medical device manufacturing standard. We're manufacturing code. Our part of the system is just the software, so we are able to be certified to 13485 because we follow good manufacturing practice, which is 62304. So there's a bit of a web of standards, but it really all flows down from 21 CFR 820, and that points to all the different standards that are appropriate for different aspects of product development.
Dan: Sure. And there's one other that you didn't mention that I just want to highlight, because I think it'll come up, which is ISO 14971, having to do with risk management. Can you talk a little about how that plays into your process?
Rightley: Yeah. Thank you. So 14971, as you mentioned, talks about risk management. And it's a risk-based approach to doing software development. So 62304 and 14971 really play together. It's all about identifying and mitigating risk early in the product development process so that you can flow that down into your software development process, and make sure that you're focusing on designing, developing, and testing the right parts of the system, and really making sure that you're maintaining that very high level of quality without needlessly testing everything to the same level that you might need to test the very most critical parts of the system.
Rightley: That also helps you mitigate technical risk as well. When you're doing that failure modes, effects, and criticality analysis, FMECA, which is prescribed by 14971, you're going to identify technical risks in addition to patient risks. They're just going to come out as part of the process. You set those aside, really, because 14971 focuses on the patient risk, but those technical risks also need to be examined. So mitigating those risks early on, as part of a phase zero, and doing that initial investigation into what the technical risks are, really can pay dividends down the line, and it really helps maintain schedule and keep your development on track, with fewer surprises down the road.
Dan: So let's turn to the meat of our discussion, which is how to ensure speed-to-market, maintain an agile process, and maintain high quality standards throughout the software development process. I know every company doing this type of work is going to follow somewhat of a phased approach, whether it's Archimedic, PSI, or other players in the industry. But can you just walk us through your software development process by phase, talk about the different types of activities that take place there, and how you operate to maintain quality and move quickly?
Rightley: We look at it as six phases. Our phase zero is investigation. Phase one is planning. Phase two is design realization. Phase three is V&V. Four is regulatory and product launch, and then five is post-market surveillance. So those are the six main phases that we go through. In phase zero, the way we really apply this is that we look back at that FMECA right away, and we start looking at what things we need to focus on and what risks we need to mitigate, from a technical aspect and also from a patient-risk aspect. Really, the main goals for us in the software development aspects are to come out of phase zero with a good software requirements spec, a good architecture, usually expressed in an architectural design chart, and good product requirements. So those are the main three things, from the software aspect, that we try to get out of phase zero.
Rightley: Those are what really set you up for success later on down the road. We here at PSI, and I think most anybody that gets into product development, know that requirements are the source of everything. They're either the source of your problems or the source of your success. Having good product requirements means you can flow down to good hardware requirements and good software requirements, and that means all the different parts of the system are being developed in harmony. So that's really the goal of phase zero: to walk out of it with all the stakeholders pretty much understanding what the system needs to do in order to fulfill its goal and help the patient.
Rightley: So phase one is planning. Planning, planning, planning. It's all about making sure that you can turn those requirements, from a software perspective, into designs and lay out sprint schedules. This is where the Agile approach starts to come in, and where you can really plan out, now that you know what the product is really going to be, how long it's going to take to get there and how you're going to develop it. So the main thing to understand about planning is: if you don't have a plan, you can't change it. So you want a plan and a work breakdown structure that's based on the requirements and flows down into sprints, and usually we set those up on about a monthly basis. Different companies find different things more useful. It could be six weeks, it could be two months. It depends on what the development cycle of the rest of the product is. But we find that setting up the sprints on a monthly basis, right there in that planning phase, is what really allows us to be agile and keep things moving throughout development.
Rightley: There are always going to be roadblocks. There's always going to be something that requires you to wait on developing a certain aspect of the system. That's why having the sprint plan is so great: you can move something from sprint three back to sprint one, and vice versa, and keep the whole project moving, even if one particular part of it is not really allowing you to progress. That's a lot of the difference between the traditional V model, the Waterfall model of software development, and applying added agile methodologies within an overall SDLC, or software development life cycle, methodology.
Rightley: So in phase one, really, the biggest thing is to make sure that you have a plan. Break down the work, lay out a sprint schedule, and know that it's going to change. During that phase, it's also a really good idea to work out how changes are going to be managed and how problems are going to be reported. All the SOPs and all the standards that surround product development, that's the point where you make sure those are in place, and that all the stakeholders understand and agree to them.
Rightley: Because that sometimes may seem like just doing boring paperwork, but you wouldn't believe how much it helps to sit down and get all the stakeholders together: software guys, hardware guys, management, marketing. They all think a little bit differently, as you might imagine. So having everybody sit around and agree to a plan about who's responsible for what at what points, how changes get managed, and when there's a design freeze, and laying all that out up front, really helps get everybody on the same team. And good planning and a good start like that in phase one is critical for letting all the different teams go do their work and then come back together and make sure that everything plays together well.
Dan: Right. So phase zero, maybe you would say the big deliverables of that, or the hardcore requirements ... Maybe I should say detailed requirements both from a-
Rightley: Product requirements, software requirements, hardware requirements, electronics, those parts should all be really worked out as part of that first investigative phase.
Dan: And your phase one deliverables will be a laid-out schedule of sprints that everyone agrees to adhere to, and a plan for change management in place. Is that correct?
Rightley: Absolutely. You must plan to change. And that's also when the mechanical guys, the electronics, everybody makes sure that their schedules mesh together. It doesn't mean that everybody gets started, runs off and begins doing work immediately as soon as that phase is closed out. It just means that everybody's plan is laid out about when things have to be done so that nobody ends up being left behind on the critical path, and then playing catch up.
Dan: Right, right. Okay. So let's move into phase two, which I think is where the rubber really starts to meet the road with software, and all the quality standards that come into play, right?
Rightley: Right. So this is where, as you said, the rubber meets the road with software development. Phase two, design realization, is, in software parlance, implementation for us. So this is where we start executing on those sprints. We open every sprint at the beginning of the four weeks, let's say. We have goals set out for what we're going to achieve during that time, and we make sure that the sprint deliverables we've set up are possible, because that's a great time to stop and evaluate. If we're waiting on a piece of electronics to arrive before we can write a bit of code, or waiting on the marketing group to give some answer about a certain thing before it can progress, that's where we're trying to make sure that, for that sprint, for that group of work, we've got the things that we need in order to actually execute.
Rightley: So for those, let's say, four weeks, we're writing code. If you're more so in the software world, you may have heard of something called Test-Driven Development. There are some aspects of that in there, where essentially you're writing automated test code along with the software so that you know, as you write functions or groups of functions, that they do the operation they're supposed to, and then when you write additional functions to interface with them, you're not breaking them. So it's really, really important that during that sprint, while writing the code, you write the automated unit tests along with it. It really helps to ensure quality. It helps your speed to market, and it also allows you to make changes in other sprints elsewhere without fear of breaking the code you already wrote. So it really sets you up for success in being able to apply those agile tenets, but not have to worry too much about what you've already done previously.
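The test-alongside-code style described above can be sketched in a few lines. This is only an illustration, not PSI's actual process or code; the function, its dose-rate calculation, and all names are hypothetical:

```python
import unittest

def dose_rate_ml_per_hr(total_volume_ml: float, duration_min: float) -> float:
    """Convert a prescribed infusion volume and duration into a pump rate.

    Hypothetical example function, not from any real device.
    """
    if duration_min <= 0:
        raise ValueError("duration must be positive")
    return total_volume_ml / (duration_min / 60.0)

class TestDoseRate(unittest.TestCase):
    # The automated tests are written alongside the function, so later
    # sprints can change surrounding code without silently breaking it.
    def test_basic_rate(self):
        self.assertAlmostEqual(dose_rate_ml_per_hr(100.0, 60.0), 100.0)

    def test_half_hour_delivery(self):
        self.assertAlmostEqual(dose_rate_ml_per_hr(50.0, 30.0), 100.0)

    def test_invalid_duration_rejected(self):
        with self.assertRaises(ValueError):
            dose_rate_ml_per_hr(100.0, 0.0)

if __name__ == "__main__":
    unittest.main()
```

Because the tests run automatically on every build, a change made in a later sprint that breaks this function is caught immediately rather than during phase three V&V.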
Rightley: So throughout the sprint, we're developing, and then towards the end of the sprint, there'll be a cutoff. It could be somewhere around week three of the four weeks, or a little bit later. That's where we'll do any sort of integration, a little bit of higher-level testing, to make sure that everything we've developed over the sprint functions properly. The idea, and what we've found our clients love, is that they want a sprint deliverable at the end of the month that is functional to a point, and everything in there works, because they want to be able to show progress. And this is great for internal teams and external teams like us alike. For software, which is a mushy, amorphous thing that a lot of people think is a little bit of black magic, being able to show deliverables on a regular basis, report on what is done and what's functioning, and say, "Yes, this works," gives everybody else on the rest of the team a nice warm fuzzy when they can see that progress from month to month.
Rightley: So we really aim for the end of the sprint to produce something that is well tested and works to the prescribed level, and then that's where we iterate through the phases. Same process: opening the sprint, doing the work, closing the sprint, on and on. At the beginning of each sprint, like I said, we're making sure that everything we need is available to us so that we can execute the sprint. It allows us to be agile, move things from sprint to sprint, and keep the overall workload as flat as you can, because as you establish a team, you want to keep that team working on the project. You don't want to scale up and scale down. Being a bit more agile allows you to do that most optimally and most efficiently, and the fastest way to market is to keep a nice, level workflow. So that's what the sprints, and being able to reorganize them as you go, allow you to do.
Dan: Let's jump into phase three, which I think is where a lot of the requirements and documentation really come into play, of how to ensure speed-to-market while maintaining quality.
Rightley: Right. So we talked a little bit about how, in the previous phase, we're doing automated unit testing, which is one kind of verification. But the vast majority of the V&V, verification and validation, goes on in phase three. So that's where, at the highest level, you're taking the requirements that you developed early on, along with the FMECA, and getting the-
Dan: Sorry. One second. Before we go on, just tell us briefly ... You mentioned the FMECA at the beginning, but just real quickly, run through what that is and how it comes into play.
Rightley: Sure. So what that essentially is, you could think of it almost as a table or a spreadsheet, and it often is organized as such. And it's a listing of every risk and harm that can happen to the patient. There are entire standards, and podcasts that could probably be done, about how to do an effective FMECA. But really what you're concerned about is getting all of the patient risks listed out, then understanding the likelihood of each risk happening. Is it a one-in-10 chance, or a one-in-a-million chance? How likely is it to happen once something gets into the field and is being used by the patient? And you need to think about other stakeholders as well. In in vitro diagnostics, for instance, there could be handling of blood, where you have to be concerned not only about whether you could get a wrong result for the patient, but also about the operators who have to handle that blood. So you need to be thinking about the other stakeholders involved in the process as well.
Rightley: So what's the likelihood of this harm happening to them? And then also, what's the severity? If something happens and it's a minor inconvenience, that's something you can test to a certain degree. You may not want to spend a man-year of development testing something that might cause five minutes of inconvenience to somebody every 100 operating hours of the system. But things that could happen perhaps very rarely, yet are very serious, you need to spend time mitigating.
Rightley: So the whole idea is that once you understand the criticality, the effects, and the likelihood of something happening, the question becomes: how do you mitigate those risks? What actions do you take throughout the rest of the development process, and especially in phase three when you're doing verification and validation, to confirm that those risks have indeed been mitigated and that you're not going to hurt somebody when this thing gets into the market? So that's the main goal of the FMECA.
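As a rough illustration of the table structure described above, each FMECA row pairs a hazard with a likelihood and a severity, and a simple index can rank where V&V effort should concentrate. The scales, hazards, and index below are hypothetical; a real FMECA uses scales and acceptance criteria defined per ISO 14971:

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    description: str
    likelihood: int   # hypothetical scale: 1 (remote) .. 5 (frequent)
    severity: int     # hypothetical scale: 1 (negligible) .. 5 (catastrophic)
    mitigation: str = ""

    @property
    def risk_index(self) -> int:
        # A simple likelihood-times-severity index for illustration only;
        # real programs define their own risk evaluation criteria.
        return self.likelihood * self.severity

hazards = [
    Hazard("False negative on cell detection", likelihood=2, severity=5),
    Hazard("UI navigation glitch between screens", likelihood=3, severity=1),
]

# Rank hazards so design, mitigation, and testing focus on the
# highest-risk items first.
for h in sorted(hazards, key=lambda h: h.risk_index, reverse=True):
    print(h.risk_index, h.description)
```

The ranking makes the later point concrete: the rare-but-serious false negative (index 10) outranks the frequent-but-minor UI annoyance (index 3), so it earns the deeper verification effort.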
Dan: Great. Okay. So back to what you were saying about bringing the requirements and the FMECA into play here at phase three, let's jump back into that.
Rightley: Sure. So really, at the highest level, the FDA generally prescribes three levels of testing for software: unit testing, integration testing, and system-level testing. Unit testing we talked about, and that's at the lowest level. We prescribe automated unit testing that is built and executed alongside the code. That happens in phase two. Then there's integration testing, where you're basically integrating components of the system. It could be software to hardware, or multiple software components together, and you're checking that they interoperate correctly. So you're making sure during that phase that the software all plays together nicely with the other components of the system and with itself. That can be done in an automated fashion, perhaps, and it can also be done in a manual fashion. It can be done at the sprint level, but usually it happens more so in phase three, at the V&V phase. And then of course, there are system-level tests. System-level verification is what I think the vast majority of people understand verification to be; that's where that happens.
Rightley: So that's where you're looping back, looking at the software requirements that you developed early on for the system. You may have modified and updated them a bit through design realization, but you're making sure that the system hits all of those requirements, and that by hitting those requirements, it's fulfilling the product requirements and fulfilling the intended use for the customer and the patient. So that's where the vast majority of V&V activities come into play. That's writing and executing those step-by-step tests.
Rightley: You can write a lot of your system-level verification very early in the process, in the planning stage, and it's highly recommended to do that. But quite often you'll have to update those as you go through design realization, and you get to that final place where you're going to start doing dry runs of your system-level verification, and then doing the official run of your system-level verification on the software.
Rightley: The FMECA really comes into play there, because you need to make sure not only that you're testing the mitigations you came up with, via the test steps, but also that you're focusing on the right areas of the system. Let's say, just theoretically, I'm talking about an in vitro diagnostic device, and you're looking for some type of cell in a blood sample. That's the whole purpose of the device. Your FMECA, and therefore your testing, is going to dictate that you spend a lot of time verifying, and later on validating, that the software is indeed able to find, with high levels of specificity and sensitivity, the particular type of blood cell that you're looking for. That's very important, and a lot of your tests should be written around that, of course, when you think about it.
Rightley: But on the other hand, you also need to verify that you can quickly and easily flow through all of the screens in the workflow without having any problems clicking and navigating from screen to screen. A navigation problem could cause a little bit of confusion, and if there were some problem where you couldn't navigate from screen to screen, that could be a real inconvenience, especially if it's one of those things that might happen one time out of 100. It's an annoyance. But worst case scenario, you run the test again.
Dan: It's not the same as having a false negative for HIV blood test or something like that.
Rightley: Yeah. A false positive or worse, a false negative. Right?
Rightley: So that's where you need to focus in that V&V stage, when you're writing those verifications and, later on, the overall product validation: on the items from the FMECA that are really high in criticality and likelihood to occur. There's another good point to make, too: the quality of your software requirements really dictates how much trouble you're going to have when you get to this stage. I think of it as the "four Cs" of a good software requirement: it must be complete, correct, concise, and you need to be able to confirm it.
Rightley: So complete, as in, each requirement in your software requirements needs to express a complete thought. It might be that there is a button on the screen that does this; there is the ability to handle the intake of a patient sample; there are workflows that cover the patient data input and the screen outputs. Those all might be different requirements, but each requirement should be a complete statement or thought. Just like we learned back in grammar school that each sentence should be a complete thought, so should each requirement.
Rightley: It needs to be correct. That seems obvious, but it needs to be reviewed to confirm that it's correct and doesn't conflict with other requirements in the document. You wouldn't believe how many software requirements documents we've looked at where you can pick out two or three requirements in the same section that all conflict with one another. It's really helpful to get people who aren't necessarily deep into the software development process to give those a look through and make sure they make sense. Good software requirements should make sense to pretty much anybody who reads them.
Rightley: They should be concise. One of the biggest places we see software requirements problems is in big, long narrative requirements, when it really should be broken up into maybe 10 or 12 different requirements instead of a paragraph. Trying to test in phase three, with concise, followable, repeatable steps, that a requirement has been met is really, really difficult when the requirement is 15 or 16 sentences long. And it leaves a lot of openness to interpretation. So it's really important that your requirements be concise.
Rightley: And finally, they need to be confirmable. So have good requirements that are testable, that don't say things like, "The system must run indefinitely." You can't test a system indefinitely, so how would you ever know if you met that requirement? "It shall be easy to use." How is it easy to use? That needs to flow down into very specific requirements, because you can't test "Is the system easy to use?" It's great to have those goals of being easy to use and having 100% uptime, but you need a confined, confirmable requirement that you can test. If you follow those four Cs at the beginning, in phase zero, it makes phase three way easier. So those are the big tips for software verification and validation.
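The "confirmable" point can be sketched directly: a vague goal like "easy to use" can't be automated, but once it flows down into a bounded requirement, it can be checked with a test. The requirement wording, threshold, and function names below are hypothetical stand-ins:

```python
import time

# Vague, unconfirmable:  "The system shall be easy to use."
# Confirmable flow-down: "The main screen shall render within
#                         2.0 seconds of application launch."
STARTUP_LIMIT_S = 2.0  # hypothetical threshold taken from the requirement

def measure_startup_seconds() -> float:
    """Stand-in for launching the real application and timing it.

    In a real harness this would start the application under test
    and wait for the main screen to render.
    """
    start = time.perf_counter()
    # ... launch application and wait for main screen here ...
    return time.perf_counter() - start

def test_startup_requirement() -> None:
    # A pass/fail verification step traceable to one concise requirement.
    assert measure_startup_seconds() <= STARTUP_LIMIT_S

test_startup_requirement()
print("startup requirement verified")
```

Each such confirmable requirement maps to one unambiguous verification step, which is exactly what makes the phase three test protocols repeatable.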
Dan: Great. Okay. So phase four where we get into the regulatory submission, maybe not a particularly time-consuming part of the process, but it's where you find out if you have done phases zero through three correctly, right?
Dan: So tell us a little bit about how the standards come into play. And obviously, if this phase doesn't go well, then your time-to-market is going to be set back considerably.
Rightley: Considerably, yeah. So this is where all the previous phases really pay off, and where you find out what you didn't do right, as you said. Most of the software team's role, as we've found, when it comes to this, is helping to put together the submission for the 510(k). That means going back through and making sure that your traceability from requirements through design, through implementation, through test is all complete, and making sure that you have all the prescribed documentation. The FDA is great in that the regulations are out there. You can find what needs to be submitted for a 510(k) just by going to the FDA website. They list off all the documents and everything you need to have from a software perspective. They even list off the types of testing that we were talking about.
Rightley: That stuff is all there, and when you're trying to put together the 510(k), that's when you find out whether or not the software team did their part. That's where you find out whether we're going to have a nice 90-day window where the FDA is reviewing our submission, or we're going to be sent back to the drawing board several times to, hopefully not recreate, but find in our documentation package and in the code where we tested this, where we explained that, and how we did everything.
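The traceability completeness check described above can be imagined as a simple gap scan over requirement-to-test links: every requirement in the submission needs verification evidence behind it. The IDs and the flat dictionary below are made up purely for illustration; real traceability lives in requirements-management tooling:

```python
# Hypothetical traceability records: requirement ID -> IDs of the
# test cases that verify it. Empty list = no verification evidence.
trace = {
    "SRS-001": ["TC-010", "TC-011"],
    "SRS-002": ["TC-020"],
    "SRS-003": [],   # gap: requirement with no test behind it
}

# Find every requirement that would hold up the 510(k) package.
untraced = [req for req, tests in trace.items() if not tests]

if untraced:
    print("Requirements missing test coverage:", untraced)
else:
    print("Traceability complete.")
```

Running a scan like this continuously during phases two and three, rather than discovering the SRS-003-style gap while assembling the submission, is one way teams avoid the "back to the drawing board" cycle.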
Dan: So let's talk about when you get sent back to the drawing board, which from time to time may happen even with the best-laid plans. Where does the Agile methodology come into play, and how do you go about re-planning, getting a tourniquet on this time-suck, to ensure that you're addressing those things as quickly as possible and that it's going to go through the second time?
Rightley: Sure. There's nothing saying that you can't apply these Agile methodologies to the requirements, the documentation, and the design phases as well. So it's like setting up a new sprint. When something gets identified and you can't just go back and point to where that item is discussed in the document, if you need to do additional mitigation, additional documentation, or put some additional processes or SOPs into place, that's like another sprint in the Agile methodology. So you do it very much the same way. As I mentioned before, it's all about taking the sprint inputs and figuring out, "Do we have everything that we need? Who are the stakeholders that we need to pull in? How do we get consensus around the work that needs to be executed?"
Rightley: You plan that sprint's work. And that could be documentation work, it could be design work, it could be development, it could be retesting or testing further something, making sure that a risk has been mitigated, and then you execute on the sprint. So you're really following that same opening, working, closing the sprint methodology that you would during the whole development phase. So it's all about planning your work, executing the work, and making sure that the work that you did is well tested and integrates well into the rest of the system. And it could be software, it could be documentation, it could be anything.
Dan: So phase five then is the post-market phase, right? During which time you're required to conduct post-market surveillance for your device. Right? Monitor what type of adverse events might be occurring, analyze their severity. Talk to us a little bit about the software maintenance and monitoring and retirement phase.
Rightley: So the planning for how this is handled is actually back in phase one: how you're going to handle change control, how you're going to handle configuration management of the software, and how you're going to handle it when something comes in from the field, evaluating it and going back through, perhaps adding to the FMECA based on information you get from the field, and then flowing that back through the process. So again, I don't mean to sound like a broken record, but you're really going to apply that Agile methodology again: evaluating the information that comes in, from the software perspective, and then flowing back through the requirements. Does it mean we need to change a requirement? Do we need to test a requirement differently? How does this affect the requirements? How does this affect the design? Where, if anywhere, do we need to make a change in the code? How does all of this get tested? And then, how do we re-release this back out into the field?
Rightley: So it starts back at that FMECA, and it flows back through the whole process, but you set it up like a sprint. What are your inputs for the sprint? What's the work you need to do, and what's the output? How do you close the sprint out? It's all about change control and having those SOPs and standards in place before you ever get to that point. You don't want to be scrambling to figure out how you're going to handle a customer complaint or an adverse event reported from the field because you don't have an SOP in place. The worst-case scenario is that somebody hears something like that, or a complaint comes in, and it just gets dropped or isn't handled, or the software is never looked at, because there's nothing in place for how to handle something like that.
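The open-work-close pattern Rightley describes for handling a field complaint can be sketched in a few lines of Python. This is an illustrative model only; all class and field names here are hypothetical, not PSI's actual SOP or tooling:

```python
# Sketch of the open -> work -> close sprint pattern applied to a field
# complaint, per the change-control discussion above. Hypothetical names.
from dataclasses import dataclass, field

@dataclass
class ChangeControlSprint:
    trigger: str                                  # e.g. a complaint from the field
    inputs: list = field(default_factory=list)    # FMECA updates, affected requirements
    outputs: list = field(default_factory=list)   # doc revisions, code changes, retests
    closed: bool = False

    def add_input(self, item: str):
        self.inputs.append(item)

    def complete(self, item: str):
        self.outputs.append(item)

    def close(self):
        # The sprint only closes once every input has a corresponding output,
        # so nothing identified from the field can silently get dropped.
        if len(self.outputs) < len(self.inputs):
            raise ValueError("open work remains; cannot close sprint")
        self.closed = True

sprint = ChangeControlSprint(trigger="field complaint (hypothetical)")
sprint.add_input("update FMECA with new failure mode")
sprint.add_input("retest affected requirement")
sprint.complete("FMECA revision approved")
sprint.complete("requirement re-verified")
sprint.close()
assert sprint.closed
```

The `close()` check is the point of the sketch: having the SOP in place up front is what prevents a complaint from being dropped unhandled.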
Dan: So let's talk a little bit about what MedTech innovation teams ought to have in place when they approach a software development partner. We're talking starting in phase zero here. What do you expect them to have in hand, besides the funding, when they come to you and say, "Rightley, I need your team to develop this system for me"?
Rightley: Sure. Well, there's usually a difference between what we would like them to have and what we expect them to have when they come. But we tend to help and get involved with anything from product requirements on down. Ideally, somebody comes to us, and we've seen this a lot with startups, especially with serial entrepreneurs who aren't first-time founders. They've been through this before. They understand what the outputs of a good phase zero are (or it might be called phase one in what they're used to), and they come to us with software requirements that follow the four Cs. Or maybe they just need a little bit of review, some questions answered, a few tweaks, and we're good to go. That's ideal. And we've had some great customers that are startups. Again, usually it's not the founder's first startup, but they come to us with those requirements.
Rightley: What we oftentimes get is a long-form narrative set of product requirements, usually with an explanation that goes along with it and a fair amount of data and science behind it that says, "Here's the medical problem that we're tackling." We get a lot of that. So very often we will help them take those long-form, narrative product requirements that are based in science and medicine and start to flow them down into short, testable product requirements, and then software requirements, and on down.
Dan: Let me ask you about something I'm sure a lot of your clients face; a lot of our early-stage company clients at Archimedic encounter it too. An early struggle is assembling enough funding to hire a vendor like PSI or Archimedic to help them develop their product. And they want to make sure that they're in the best position they can be, during all that time when they're raising funds, to be ready to start. What should a team be doing to get ready to launch into this process with a software vendor?
Rightley: In preparation for bringing a software vendor on board, or even if they're choosing to hire software folks in house as FTEs, be prepared by having at least identified somebody in the regulatory realm who understands the regulation around their device. Maybe they haven't actually engaged them yet, but have them on tap and know that you've got somebody who understands what regulation applies and how, because that of course will flow down into software. We understand software very, very well, inside and out, but we don't always understand the overall medicine behind it, how the risks of software can flow upward to patient risks, and how the FDA is going to look at those.
Rightley: So that person is really, really important; whether they're in house or somebody else, you need to identify them. Having a good understanding of the addressable market and the product requirements, what the product needs to do, is oftentimes overlooked. And they could be long-form narrative requirements, but have something that at least your internal stakeholders all understand and agree on, because I can't tell you how many times we've been in what was supposed to be a software kickoff meeting while some of the very basics about system operation and what the device must do were still being hashed out around the table. So having all the internal stakeholders agreeing about what market they're addressing, how the product's going to be developed, and what the product's going to do is very important.
Rightley: And then also, before coming and looking for software vendors, review those standards that we talked about. You don't have to be an expert. You don't have to know 62304 inside and out; it's written in fairly plain English, and you don't have to read and understand every aspect of it, but the standards are all out there. Evaluation copies of 62304 and some of the other paid standards can often be obtained just for educational purposes. Reviewing 21 CFR 820, reviewing 62304, or at least understanding the overall software development life cycle, and reviewing ISO 14971 and what goes into risk management, are a huge leg up in understanding what's about to come, what's going to be in this process, and the overall effort that's going to have to be applied, not just to the product development itself, but to everything that goes around it.
Dan: So one thing that we haven't talked about, but that I'm sure is on a lot of listeners' minds when we talk about ensuring quality through the software development process, is cybersecurity, right? More and more devices are Internet connected, and there are risks of malicious or unintentional interference with software that may be critical to a patient's health or data security. Talk a little bit about how that ties into your quality processes.
Rightley: Sure. So no matter when you're listening to this, there's always going to be a recent data breach that keeps this fresh in people's minds. And we get this question all the time. It could be a whole episode unto itself-
Dan: And I think it will be.
Rightley: But from a software perspective, there are a lot of facets to data security and cybersecurity. It's not just software. There are hardware aspects, and there are a lot of network and infrastructure aspects that go along with it. But where it really comes into a lot of the software development that we do is that good best practices really are your best protection. You can go way into the weeds, and there's a lot that can be done, security that can be added on top of a system, or built around a system, or in the network infrastructure where the system is connected. But a lot of it is just best practices. And unfortunately, a lot of the cybersecurity problems that we hear about with devices in the field are cases where best practices just weren't followed.
Rightley: So those are things like, again, I hate to harp on requirements, but understanding, back at the requirements phase, who needs what access to what data, and when? How is it tracked if any of that access is made or if data is changed? And how is that data protected? Is it encrypted when it's sitting on the chip inside a device, on a server, or on a hard drive in a PC? Is it encrypted when it's being transmitted across the network, even a local network? Are the appropriate levels of access and password protection, and changing of passwords and everything, all built into the software from the beginning? Really, those are all best practices that should be followed throughout the requirements, the design, and then of course the implementation. And then of course when you get into V&V, test them. Make sure that you're doing a bit of penetration testing, a bit of that button pushing, trying to find those unintended consequential effects which may allow somebody access into a system.
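Two of the best practices Rightley names, deciding who gets what access and tracking every access attempt, can be sketched briefly in Python. This is a minimal illustration with hypothetical roles and record names, not a real device's access-control implementation (which would also need encryption, authentication, and secure key storage):

```python
# Sketch of role-based access plus an audit trail: every attempt is
# logged, allowed or not, so access to data is always tracked.
# Roles, users, and record names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

PERMISSIONS = {
    "clinician": {"read", "write"},
    "technician": {"read"},
}

@dataclass
class DeviceDataStore:
    records: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def _check(self, user, role, action, key):
        allowed = action in PERMISSIONS.get(role, set())
        # Log the attempt whether or not it succeeds.
        self.audit_log.append(
            (datetime.now(timezone.utc), user, role, action, key, allowed)
        )
        if not allowed:
            raise PermissionError(f"{role} may not {action} {key}")

    def read(self, user, role, key):
        self._check(user, role, "read", key)
        return self.records.get(key)

    def write(self, user, role, key, value):
        self._check(user, role, "write", key)
        self.records[key] = value

store = DeviceDataStore()
store.write("dr_a", "clinician", "dose_limit", "2.5mL")
assert store.read("tech_b", "technician", "dose_limit") == "2.5mL"
try:
    store.write("tech_b", "technician", "dose_limit", "9mL")  # denied
except PermissionError:
    pass
assert len(store.audit_log) == 3  # all three attempts were tracked
```

The denied write still appears in the audit log, which is the "how is it tracked" requirement from the conversation above.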
Rightley: It's really hard to test everything, of course, but really following good practices and shrinking your attack surface is your best insurance. You can never be 100% sure that you're invulnerable. It just doesn't happen. There's always a vulnerability. The main thing is to make sure the devices and the things that you're working on have the smallest attack surface possible. That's really your best insurance. It all comes from following good practices, good requirements, good design, and using good off-the-shelf technologies and components that are well supported by the industry and are being tested and proven every day in use.
Rightley: You're right, you're trying to eliminate surprises. That's the whole point. The whole point of applying this AGILE methodology throughout, while maintaining the quality around the requirements, is to eliminate surprises. You're trying to mitigate as many of those risks as you can figure out upfront, and you're trying to make sure that each sprint you deliver has increasing levels of functionality and really no surprises from the previous one. That's how you're really ensuring speed-to-market. You can only make development go so fast, but you can try to eliminate as many surprises as you can, and make sure that you're getting to market when you think you are, even if it's not as fast as you had initially hoped.
Dan: So the term AGILE comes up a lot. I think people throw it around very casually and I think it's leaked out of software into other disciplines. And there's AGILE everything. What do you mean when you say AGILE methodology and why is it so critical to integrate it into the development of a Med device software system?
Rightley: Sure. I think AGILE has gotten somewhat of a bad name over the years in the software development community because it's quite often treated as if, as I've heard it put, AGILE were spelled A-D H-O-C. It's not just an ad hoc methodology that allows developers to fly by the seat of their pants and figure out today what they're going to develop today. That's not what AGILE is, or is supposed to be. There are a lot of flavors of it, and a lot of different ways you can practice it, but they all have a lot in common in that it still means getting all of the stakeholders together and getting them to agree on what's being developed, while being flexible in the way that you develop it. And I think that's what we've really tried to implement here at PSI.
Rightley: I've heard what we practice called a bit of a "Wagile" methodology, because you take the Waterfall methodology, if you've heard of that in software development (it actually comes from electronics), for the requirements and design, put that together with sprint-wise development during the implementation phase, and you have "Wagile," as I've heard it called. So like I said, I think AGILE just gets a bit of a bad rap, and I think it can be applied effectively in a lot of different disciplines. You could speak to electromechanical better than me, for sure. But it needs to be built and deployed within a framework of good stakeholder agreement, and requirements, and knowing how you're going to test it on the back end, to make sure that the thing you developed agilely actually does what it's supposed to do.
Dan: I think we're nearing the end of the time you've promised me, so I really appreciate you taking the time to come on and talk with me, Rightley. If it's all right, we'll put your contact information in the blog post when we publish this episode, and folks can get in touch with you if they want to pick your brain a little bit more.
Rightley: We're always happy to help. Getting somebody started off the right way benefits everyone. So we're always happy to take a phone call and help someone out.
Dan: Great. Hey, well thanks very much Rightley.
Rightley: Thanks for having me.
Dan: Take care.