
Introduction to Attention-Based Neural Networks

Course Outline

Attention-based models allow neural networks to focus on the most important features of the input, thus producing better results at the output. In this course, Janani Ravi explains how recurrent neural networks work, then builds and trains two image captioning models, one without attention and one using an attention mechanism, and compares their results. If you have some experience with and understanding of how neural networks work and want to see what attention-based models can do for you, check out this course.
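
To make that idea concrete before the outline: an attention mechanism scores each input feature against the model's current state, turns the scores into weights with a softmax, and feeds the weighted sum (the context vector) back into the network, so the most relevant features dominate. The snippet below is a minimal, illustrative sketch of that computation, assuming PyTorch and a simple dot-product score; the tensor names and sizes are hypothetical and not taken from the course.

    import torch
    import torch.nn.functional as F

    # Hypothetical shapes: 10 encoder features of size 256, one decoder state of size 256.
    encoder_features = torch.randn(10, 256)   # e.g. unrolled image regions or source tokens
    decoder_state = torch.randn(256)          # current decoder hidden state

    # Dot-product scores: how relevant each feature is to the current state.
    scores = encoder_features @ decoder_state                        # shape (10,)
    weights = F.softmax(scores, dim=0)                               # attention weights, sum to 1
    context = (weights.unsqueeze(1) * encoder_features).sum(dim=0)   # context vector, shape (256,)

Bahdanau attention, covered later in the course, replaces the plain dot product with a small learned scoring network.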


01 - Introduction
  • 01 - Prerequisites
  • 02 - What are attention-based models
  • 03 - Attention in language generation and translation models

02 - 1. Recurrent Neural Networks to Learn Sequential Data (code sketch below)
  • 01 - Feed forward networks and their limitations
  • 02 - Recurrent neural networks for sequential data
  • 03 - The need for long memory cells
  • 04 - LSTM and GRU cells
  • 05 - Types of RNNs
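
As a rough companion to this chapter, the sketch below shows the kind of recurrent layer it discusses, assuming PyTorch (not necessarily what the course uses): an LSTM reads a batch of sequences step by step and produces one hidden state per time step plus a final hidden and cell state; swapping nn.LSTM for nn.GRU gives the simpler gated cell with no separate cell state. All sizes are made up for illustration.

    import torch
    import torch.nn as nn

    # Hypothetical sizes: 4 sequences, 12 time steps each, 32-dim inputs, 64-dim hidden state.
    lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
    x = torch.randn(4, 12, 32)

    outputs, (h_n, c_n) = lstm(x)
    # outputs: (4, 12, 64) - one hidden state per time step (these are what attention works over later)
    # h_n, c_n: (1, 4, 64) - final hidden and cell states, a fixed-size summary of each sequence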

03 - 2. Encoder-Decoder Networks for Language Models (code sketch below)
  • 01 - Language generation models
  • 02 - Sequence to sequence models for language translation
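
The encoder-decoder pattern from this chapter, reduced to a skeleton (a sketch under my own assumptions, in PyTorch, not the course's code): the encoder compresses the source sequence into its final hidden state, and the decoder starts from that state and predicts the target sequence with teacher forcing.

    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        # Hypothetical vocabulary and layer sizes, chosen only for illustration.
        def __init__(self, src_vocab=1000, tgt_vocab=1000, emb=64, hidden=128):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, emb)
            self.tgt_emb = nn.Embedding(tgt_vocab, emb)
            self.encoder = nn.GRU(emb, hidden, batch_first=True)
            self.decoder = nn.GRU(emb, hidden, batch_first=True)
            self.out = nn.Linear(hidden, tgt_vocab)

        def forward(self, src_ids, tgt_ids):
            _, h = self.encoder(self.src_emb(src_ids))            # h: (1, batch, hidden)
            dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)   # teacher forcing on target tokens
            return self.out(dec_out)                              # (batch, tgt_len, tgt_vocab)

The single fixed-size hidden state handed from encoder to decoder is exactly the bottleneck that the attention chapter addresses next.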

04 - 3. Attention-Based Neural Networks (code sketch below)
  • 01 - The role of attention in sequence to sequence models
  • 02 - Attention mechanism in sequence to sequence models
  • 03 - Alignment weights in attention models
  • 04 - Bahdanau attention
  • 05 - Attention models for image captioning
  • 06 - Encoder decoder structure for image captioning
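
Bahdanau (additive) attention scores each encoder state h_i against the decoder state s by passing W1*s + W2*h_i through a tanh and a learned vector v, softmaxes the scores into alignment weights, and returns their weighted sum as the context vector. The module below is a minimal PyTorch-style sketch of that formula; class and parameter names are my own, not necessarily the course's.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BahdanauAttention(nn.Module):
        def __init__(self, enc_dim, dec_dim, attn_dim):
            super().__init__()
            self.W1 = nn.Linear(dec_dim, attn_dim)   # projects the decoder state
            self.W2 = nn.Linear(enc_dim, attn_dim)   # projects the encoder states
            self.v = nn.Linear(attn_dim, 1)          # collapses each projection to a scalar score

        def forward(self, decoder_state, encoder_states):
            # decoder_state: (batch, dec_dim); encoder_states: (batch, seq_len, enc_dim)
            scores = self.v(torch.tanh(
                self.W1(decoder_state).unsqueeze(1) + self.W2(encoder_states)
            ))                                                   # (batch, seq_len, 1)
            weights = F.softmax(scores, dim=1)                   # alignment weights over the sequence
            context = (weights * encoder_states).sum(dim=1)      # (batch, enc_dim) context vector
            return context, weights.squeeze(-1)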

05 - 4. Image Captioning Model without Attention (code sketch below)
  • 01 - Setting up Colab and Google Drive
  • 02 - Loading in the Flickr8k dataset
  • 03 - Constructing the vocabulary
  • 04 - Setting up the dataset class
  • 05 - Implementing utility functions for training data
  • 06 - Building the encoder CNN
  • 07 - Building the decoder RNN
  • 08 - Setting up the sequence to sequence model
  • 09 - Training the image captioning model
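
A condensed sketch of the no-attention pipeline this chapter assembles, reconstructed under my own assumptions (PyTorch plus a torchvision ResNet backbone; the course's actual classes, sizes, and names may differ): the encoder CNN reduces each image to one fixed-length embedding, and the decoder LSTM takes that embedding as its first input and then learns to emit the caption tokens.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class EncoderCNN(nn.Module):
        def __init__(self, embed_size):
            super().__init__()
            resnet = models.resnet50(weights="DEFAULT")
            self.backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop the classifier head
            self.fc = nn.Linear(resnet.fc.in_features, embed_size)

        def forward(self, images):                        # images: (batch, 3, H, W)
            with torch.no_grad():                         # keep the pretrained backbone frozen
                feats = self.backbone(images).flatten(1)  # (batch, 2048)
            return self.fc(feats)                         # (batch, embed_size)

    class DecoderRNN(nn.Module):
        def __init__(self, embed_size, hidden_size, vocab_size):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_size)
            self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
            self.fc = nn.Linear(hidden_size, vocab_size)

        def forward(self, image_embedding, captions):     # captions: (batch, T) token ids
            # Prepend the image embedding as the first "token" of the input sequence.
            inputs = torch.cat([image_embedding.unsqueeze(1), self.embed(captions)], dim=1)
            hiddens, _ = self.lstm(inputs)
            return self.fc(hiddens)                       # (batch, T + 1, vocab_size)

Because the whole image is squeezed into a single vector, the decoder sees the same summary at every step, which is the limitation the attention-based model in the next chapter removes.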

06 - 5. Image Captioning Model Using Attention (code sketch below)
  • 01 - Loading the dataset and setting up utility functions
  • 02 - The encoder CNN generating unrolled feature maps
  • 03 - Implementing Bahdanau attention
  • 04 - The decoder RNN using attention
  • 05 - Generating captions using attention
  • 06 - Training the attention-based image captioning model
  • 07 - Visualizing the model's attention
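
One decoding step of the attention-based model, sketched with hypothetical shapes and freshly initialized projections (so it only illustrates the data flow, not trained behavior): the encoder CNN's spatial feature map is unrolled into region vectors, additive attention weights those regions against the current decoder state, the weighted sum becomes the context fed to the decoder RNN, and the same weights, reshaped back to the spatial grid, give the attention visualization from the last video.

    import torch
    import torch.nn.functional as F

    # Hypothetical shapes: a 7x7 CNN feature map with 256 channels, unrolled to 49 region vectors.
    feature_map = torch.randn(1, 256, 7, 7)              # encoder CNN output for one image
    regions = feature_map.flatten(2).permute(0, 2, 1)    # (1, 49, 256) unrolled feature map
    decoder_state = torch.randn(1, 512)                  # current decoder hidden state

    # Additive (Bahdanau-style) scoring with randomly initialized projections, for shape clarity only.
    W1 = torch.nn.Linear(512, 256)
    W2 = torch.nn.Linear(256, 256)
    v = torch.nn.Linear(256, 1)

    scores = v(torch.tanh(W1(decoder_state).unsqueeze(1) + W2(regions)))  # (1, 49, 1)
    weights = F.softmax(scores, dim=1)                   # one weight per image region
    context = (weights * regions).sum(dim=1)             # (1, 256), fed to the decoder RNN

    # For visualization, reshape the 49 weights back to the 7x7 grid and upsample over the image.
    attention_map = weights.view(1, 1, 7, 7)
    heatmap = F.interpolate(attention_map, size=(224, 224), mode="bilinear", align_corners=False)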

07 - Conclusion
  • 01 - Summary and next steps
  • 45,900 تومان
    بیش از یک محصول به صورت دانلودی میخواهید؟ محصول را به سبد خرید اضافه کنید.
    خرید دانلودی فوری

    در این روش نیاز به افزودن محصول به سبد خرید و تکمیل اطلاعات نیست و شما پس از وارد کردن ایمیل خود و طی کردن مراحل پرداخت لینک های دریافت محصولات را در ایمیل خود دریافت خواهید کرد.

    ایمیل شما:

Producer:
Instructor:
ID: 1369
Size: 278 MB
Duration: 132 minutes
Release date: 26 Dey 1401 (Iranian calendar)