About Infactory
We’re Infactory. Our mission is to further AI innovation through facts and trust. Our fact platform powers AI applications for businesses that depend on accuracy. Whether our customers are building chatbots, search tools, knowledge management systems, or bleeding-edge Gen AI technology, our solution ensures responses are not just intelligent, but factual and useful.
Some Of Our Values
• Facts are at the center of everything we build. We hold ourselves to the highest standards of definitive accuracy.
• We develop technology that builds and restores trust.
• We’re not riding the AI wave; we’re reshaping it. We don’t do hype.
• We make great products that are easy, trustworthy and useful.
Your Day-to-day Will Often Change, But It May Include
• Get to know our customers’ data: Work directly with customers and data partners to gain a deep understanding of their data contents and structures. Act as a liaison between technical teams and data providers, translating business needs into technical requirements.
• Manage data integrations: Write detailed specifications for integrating new data sources into our existing tech stack. Develop robust and efficient code to implement these integrations, ensuring smooth data flow and compatibility.
• Build smart data models: Use our frameworks to create queries that surface valuable facts across different types of data. These data models will be the foundation for intelligent query design and fast processing.
• Work across the technology stack: Use NoSQL datastores such as Apache Cassandra and Google Bigtable. Leverage tools such as Apache Spark and Apache Kafka to process and analyze large datasets.
Qualifications
Note: These are guidelines, not hard requirements! If you think you’d be a good fit, please apply.
• Degree in Statistics, Data Science, Computer Science, Mathematics, Engineering, or a related quantitative field.
• 3-5+ years of work experience in data engineering or a similar role
• Experience working directly with customers or stakeholders to gather data requirements
• Proficiency in at least one programming language commonly used in data engineering (e.g., C++, Python, Java, Scala)
• Strong experience with NoSQL databases
• Hands-on experience with big data processing tools, especially Apache Spark and Apache Kafka
• Solid understanding of data modeling principles and best practices
• Familiarity with cloud platforms (e.g., AWS, Google Cloud, Azure) and their data services
Compensation And Benefits
• San Francisco Bay Area/Hybrid preferred, remote considered.
• $120k-150k with equity in an early-stage startup
• Competitive benefits
• 20 days PTO + paid holidays + unlimited sick leave
The Pay Range For This Role Is
120,000 - 150,000 USD per year (San Francisco HQ)
Location: San Francisco, CA
Posted: Aug. 16, 2024, 2:52 p.m.