Artificial intelligence (AI) can carry deeply embedded bias. It is the job of data scientists and developers to ensure their algorithms are fair, transparent, and explainable. This responsibility is critically important when building models that may determine policy or shape the course of people's lives. In this course, award-winning software engineer Kesha Williams explains how to debias AI with Amazon SageMaker. She shows how to use SageMaker to build a predictive-policing machine-learning model that integrates Amazon Rekognition and AWS DeepLens, so the model can "see" what is happening in a live scene. By following the development process, you can learn what goes into making a model that does not suffer from cultural prejudices. Kesha also discusses how to remove bias from training data, test a model for fairness, and build trust in AI by making models explainable.
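To make "testing a model for fairness" concrete, here is a minimal sketch of one common check: the disparate impact ratio (the "four-fifths rule"). The data and the `disparate_impact` helper below are illustrative assumptions, not code from the course; SageMaker offers richer bias metrics through its Clarify feature.

```python
# Sketch of a basic fairness test: compare a model's positive-prediction
# rate for a protected group against everyone else. A ratio below 0.8 is
# a common red flag for bias (the "four-fifths rule"). The data below is
# synthetic and the helper is illustrative, not the course's own code.

def disparate_impact(predictions, groups, protected_group):
    """Ratio of positive-prediction rates: protected group vs. the rest."""
    protected = [p for p, g in zip(predictions, groups) if g == protected_group]
    rest = [p for p, g in zip(predictions, groups) if g != protected_group]
    rate_protected = sum(protected) / len(protected)
    rate_rest = sum(rest) / len(rest)
    return rate_protected / rate_rest

# Toy example: 1 = the model flags a record, 0 = it does not.
preds = [1, 0, 0, 0, 1, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = disparate_impact(preds, groups, protected_group="a")
print(f"Disparate impact: {ratio:.2f}")  # 0.25 / 0.75 ≈ 0.33 -> biased
```

In a real workflow this check would run on held-out evaluation data after each training round, so that bias introduced by the training set is caught before the model is deployed.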
Java Persistence with JPA
Prompt Engineering for Improved Performance
Evaluating and Debugging Generative AI
Creating GPTs with Actions
Learning Java EE: Web Services
Building a Project with the ChatGPT API
Complete Course on AWS Identity and Access Management (IAM) Concepts
Java EE 7: Web Services
OpenAI API: Embeddings
Debiasing AI Using Amazon SageMaker