Develop and implement AI models and algorithms: Design, develop, and deploy machine learning and deep learning models using OpenAI technologies and frameworks. This includes data preprocessing, feature engineering, model training, and evaluation.
Collaborate with cross-functional teams: Work closely with data scientists, software engineers, and domain experts to understand project requirements, gather data, and develop AI solutions that meet specific business needs.
Conduct research and stay updated on AI advancements: Keep abreast of the latest advancements and trends in the field of AI, particularly in natural language processing, computer vision, and reinforcement learning. Apply new techniques and approaches to improve model performance and efficiency.
Data analysis and preprocessing: Analyze and preprocess large volumes of data to ensure its quality, completeness, and suitability for training AI models. Implement data cleaning, transformation, and feature extraction techniques to optimize model performance.
Model training and evaluation: Train AI models using appropriate algorithms and frameworks, fine-tuning parameters and conducting experiments to improve model accuracy and performance. Evaluate models using appropriate evaluation metrics and statistical techniques.
Model deployment and integration: Deploy trained AI models into production systems and integrate them into existing software applications or platforms. Ensure scalability, efficiency, and reliability of the deployed models.
Performance optimization and troubleshooting: Identify bottlenecks, optimize AI models for improved performance, and resolve any issues or bugs that arise during model deployment or integration.
Documentation and knowledge sharing: Document AI model development processes, methodologies, and outcomes. Share knowledge and insights with team members and stakeholders through technical documentation, presentations, and training sessions.
Maintain code quality and best practices: Follow coding standards, best practices, and version control processes to ensure code quality, maintainability, and collaboration within the development team.
Leverage OpenAI tools and platforms: Use OpenAI tools, libraries, and platforms effectively to accelerate AI model development and deployment. Stay current with OpenAI's latest offerings and provide feedback to improve the tools and platforms.
QUALIFICATIONS AND REQUIREMENTS
University or advanced degree in engineering, computer science, mathematics, or a related field.
7+ years of experience developing and deploying machine learning systems in production.
Strong experience working with a variety of relational SQL and NoSQL databases.
Strong experience working with big data tools: Hadoop, Spark, Kafka, etc.
Experience with at least one cloud provider solution (AWS, GCP, Azure).
Strong experience with object-oriented/object function scripting languages: Python, Java, C++, etc.
Solid understanding of machine learning and deep learning, and of existing frameworks such as TensorFlow and PyTorch.
Ability to work in a Linux environment.
Industry experience building innovative end-to-end machine learning systems.
Ability to quickly prototype ideas and solve complex problems by adapting creative approaches.
Experience working with distributed systems, service-oriented architectures, and API design.
Strong knowledge of data pipeline and workflow management tools.
Expertise in standard software engineering practices, e.g., unit testing, test automation, continuous integration, code reviews, and design documentation.
Relevant working experience with Docker and Kubernetes is a strong plus.