Matt Fisher, general counsel at the virtual care company Carium, noted that these effects can leave organizations open to a range of potential liabilities. It is an emerging area, he explained, and there are many open questions about where risks and liabilities may arise.
Fisher described two main areas of legal concern: bias and cybersecurity. On the cybersecurity side, the issues lie not so much with a model's final results as with the period during which it is trained. When large technology companies contract with a healthcare system, they typically build a new system to analyze the data and produce outcomes.
All of that data can be a target for bad actors. When a health system transfers protected health information to a big tech company, it creates not only a privacy risk but also a security risk. Organizations therefore need to ensure the systems are designed to withstand attack.
A breach is a matter of when, not if, Fisher said. He noted that synthetic or de-identified data offer ways to reduce that risk, provided the data sets remain sufficient for training.
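As a rough illustration of what de-identifying records before sharing them for model training might look like: the field names, record, and age-bucketing rule below are illustrative assumptions for this sketch, not a compliance recipe, and a real pipeline would follow a formal standard such as the HIPAA Safe Harbor method.

```python
# Hypothetical set of direct identifiers to strip before data sharing.
DIRECT_IDENTIFIERS = {"name", "ssn", "email", "phone", "address"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and coarsen age into a decade range."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "age" in cleaned:
        age = cleaned.pop("age")
        # Bucket ages to reduce re-identification risk; ages 90 and over
        # are grouped together, mirroring common de-identification guidance.
        decade = (age // 10) * 10
        cleaned["age_range"] = "90+" if age >= 90 else f"{decade}-{decade + 9}"
    return cleaned

# Example: identifiers removed, age coarsened, clinical fields retained.
record = {"name": "Jane Doe", "ssn": "000-00-0000", "age": 47, "diagnosis": "I10"}
print(deidentify(record))  # {'diagnosis': 'I10', 'age_range': '40-49'}
```

The point of the sketch is the trade-off Fisher describes: the less identifying the data, the lower the breach exposure, but the data set must stay rich enough to train a useful model.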
Anyone working with sensitive information must keep that in mind. On the bias side, if a device relies on a biased algorithm and produces a less-than-ideal outcome, that could lead to claims against the manufacturer. Research has shown that biased models can have disproportionate effects on groups that already face worse health outcomes.
Fisher hopes panel attendees will leave the discussion ready to engage in open dialogue about legal risk. An organization can take steps to reduce its liability, he said, but it cannot fully shield itself from the risk of legal action.