We address the challenges of executing custom AI components within a serverless framework, using AWS as a reference: the limits of VPC IP address space, the cost of attaching and detaching ENIs to Lambda functions, and overall design patterns in a microservices environment. We then cover service orchestration flows in which Step Functions coordinate asynchronous processing units alongside Lambda functions, and we identify how to expose these flows as service endpoints using API Gateway.
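As a rough illustration of such an orchestration flow, the sketch below builds a minimal Amazon States Language (ASL) definition in which Step Functions chains two Lambda functions; an API Gateway endpoint would then front the state machine's StartExecution call. The function names, account ID, and ARNs are hypothetical placeholders, not values from this text.

```python
import json

# Hypothetical placeholders: "PreprocessFn" and "InferenceFn" stand in for
# custom AI Lambda functions; the account ID and region are illustrative only.
PREPROCESS_ARN = "arn:aws:lambda:us-east-1:123456789012:function:PreprocessFn"
INFERENCE_ARN = "arn:aws:lambda:us-east-1:123456789012:function:InferenceFn"

# A minimal ASL state machine: Step Functions invokes one Lambda, then the next.
state_machine = {
    "Comment": "Sketch: orchestrate custom AI Lambda functions",
    "StartAt": "Preprocess",
    "States": {
        "Preprocess": {
            "Type": "Task",
            "Resource": PREPROCESS_ARN,
            "Next": "Inference",
        },
        "Inference": {
            "Type": "Task",
            "Resource": INFERENCE_ARN,
            "End": True,
        },
    },
}

# Serialize the definition; with boto3 this JSON would be passed to
# stepfunctions.create_state_machine(name=..., definition=definition_json, roleArn=...).
definition_json = json.dumps(state_machine)
print(definition_json)
```

An API Gateway integration would then map an HTTP request to `states:StartExecution` against this state machine, returning the execution ARN to the caller while the Lambda functions run asynchronously.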
AWS Systems Manager Run Command can execute core processing logic in containers (ECS or Kubernetes). We cover these scenarios, along with layering security onto this framework to expose the AI customizations and logic as service endpoints, and some of the components that can be deployed at the edge.
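To make the Run Command scenario concrete, the sketch below assembles the request a boto3 `ssm.send_command` call would take to run container-based processing on tagged instances. The tag key, image name, and shell command are hypothetical examples, not values specified in this text.

```python
# Hypothetical sketch: target EC2/ECS container hosts carrying the tag
# role=ai-workers and run a containerized processing step on each of them.
run_command_request = {
    # AWS-RunShellScript is a built-in SSM document that executes shell commands.
    "DocumentName": "AWS-RunShellScript",
    # Target selection by tag; "ai-workers" is an illustrative tag value.
    "Targets": [{"Key": "tag:role", "Values": ["ai-workers"]}],
    # The command itself; "my-ai-image" is a placeholder container image.
    "Parameters": {
        "commands": ["docker run --rm my-ai-image"],
    },
}

# With boto3 this request would be dispatched as:
#   response = boto3.client("ssm").send_command(**run_command_request)
# and the command status polled via ssm.list_command_invocations.
print(sorted(run_command_request))
```

Because Run Command is asynchronous, a Step Functions task (or a polling Lambda) would typically check the command invocation status before the flow proceeds.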