Job Title: Data Engineer
Job Location: Salford Quays, Manchester
Job Type: Permanent, Full-time
Job ID: SF19005
Naimuri is offering the chance to help make the UK a safer place through innovation. We partner with government and law enforcement on some of the most challenging data and technology problems out there, and we're looking for a Data Engineer to join our mission.
We strongly encourage candidates of all different backgrounds and identities to apply. We are committed to building an inclusive, safe and supportive environment that allows everyone to do their best work. We are happy to support any accessibility or neurodiversity requirements that you may need during the recruitment process.
About us:
We’ve been around for about ten years and grown from being a little-known tech start-up to creating our own community at the heart of Manchester’s thriving tech ecosystem.
The name Naimuri is Japanese and simply means…
‘nai’ meaning ‘not’
‘muri’ meaning ‘overburden’
This principle guides everything we do, from our technology and processes to our people and culture. We empower our teams to do what they think is right, giving them the confidence to explore new ways of working and deliver the finest solutions in an agile, bias-free environment.
Our business is focused on 4 cornerstones: Wellbeing, Empowerment, Perpetual Edge and Delivery.
People and culture are at the heart of Naimuri, so that collectively, we can realise our mission of ‘making the UK a safer place to be’.
About the team:
The Data capability team at Naimuri offers a unique opportunity to apply your skills to impactful projects. It's a rapidly growing, collaborative, and supportive environment where we analyse and investigate data, design solutions to exciting data-driven challenges, and make a real difference for our customers. We are passionate about continuous learning and fostering shared expertise within the team.
Data Engineers within our Data capability team typically work on:
- Analysing customer requirements in long-term projects and new bid work to uncover opportunities for customers to leverage their data.
- Engineering and automating resilient, scalable data platforms and pipelines using tools like Apache Spark, Apache NiFi, and Kubeflow.
- Working with a variety of datastores, including relational (SQL), NoSQL (Elasticsearch, MongoDB), and Graph Databases (Neo4j).
- Analysing and modelling complex customer data, performing statistical analyses, and designing cleansing, transformation, and normalisation processes.
- Deploying and managing ML/AI models and environments using frameworks such as TensorFlow and PyTorch.
- Writing and supporting high-quality software solutions in Python to implement data science models, tools, and techniques.
- Leveraging cloud platforms like AWS, Azure, and GCP to build and deploy robust data solutions.
About the role:
As a Data Engineer, you will help maintain our strong reputation for delivering robust solutions by taking a conscientious and scientific approach to customer data challenges. You will use your strong problem-solving skills to design and develop innovative techniques and tools in an agile manner.
Working collaboratively with other data engineers, data scientists, and developers, you will be responsible for building the foundational systems and connective tissue that make our data science work possible. A key part of our culture at Naimuri is mentoring and supporting earlier-career colleagues, helping to foster continuous learning and shared expertise across the team.
You will work closely with customers and internal teams to:
- Design, build, and maintain data ingestion and transformation pipelines.
- Investigate, transform (with provenance), and model customer data, performing data cleansing and feature engineering to prepare data for analysis. This may be with tools such as Apache NiFi, or libraries such as Pandas (Python).
- Work with data architects and platform engineers to design and implement secure, scalable data storage and processing solutions.
- Apply statistical methods to analyse customer data using libraries such as NumPy and SciPy.
- Identify opportunities to design and build algorithms to transform and interrogate data at scale.
- Collaborate with Data Scientists to productionise ML/AI models, ensuring they are efficient, scalable, and maintainable.
- Develop data visualisations and reporting tools for audiences of different technical abilities, using libraries like Matplotlib.
- Test and compare the effectiveness of different computational techniques and database technologies for working with data.
About you:
We're looking for someone who:
- Has experience with, and is passionate about, building robust, scalable systems to handle complex data.
- Takes a conscientious, curious, and scientific approach to their work.
- Is continually learning about state-of-the-art techniques from technology, academic, and industry articles.
- Has strong programming skills, particularly in Python.
- Has hands-on experience with relational (SQL) databases and/or NoSQL or distributed database technologies (e.g. Elasticsearch, MongoDB, Neo4j).
- Has a solid understanding of data modelling, data cleansing, and data engineering principles, and potentially other processes such as:
  - Data quality monitoring.
  - Performance monitoring and tuning.
  - Change data capture/audit/generation and sync of derived data sets.
  - Schema design and migrations.
- Has strong analytical and problem-solving abilities.
- Can communicate complex technical ideas to diverse audiences.
Nice to haves:
- A degree in a field like Computer Science, Data Science, Engineering, Mathematics, or Physics (though we value demonstrable experience just as much!).
- Experience designing and developing data ingestion and transformation pipelines using tools like Apache Spark or cloud-native solutions in AWS, Azure, or GCP.
- Familiarity with the lifecycle of ML/AI models and experience with MLOps tools like Kubeflow/MLflow.
- Experience designing and running batch processing or streaming jobs.
- Experience with Graph Databases (e.g., Neo4j).
- Familiarity with data science and machine learning libraries (Scikit-learn, NLTK, spaCy).
- Experience creating Python-based applications and/or APIs (e.g. using Pydantic).
- Familiarity with data governance and lineage at both a conceptual and implementation level.
Location:
Our Head Office is based in Salford Quays, Manchester, with satellite teams currently in London and Gloucestershire. We offer hybrid working, so you can work from home for part of your working week, with time on site based on the needs of your assigned delivery and the agreed Ways of Working for your team. This would normally be a maximum of one or two days per week, but you are welcome to spend more days in the office if you prefer.
We would potentially be interested to see applications from people within commuting distance of Huntingdon, Cambridgeshire.
Pay and benefits:
Naimuri pays competitively within the industry, based on the rates for your role's base location. The salary for this position depends on your experience; we assess seniority relative to the team at Naimuri during the interview process.
A full-time working week is 37.5 hours, and you have flexibility over when you work those hours. We also offer part-time working, which can be discussed during the recruitment process.
Our core hours are 10:00 to 15:00, and our office hours are 07:30 to 18:00, Monday to Friday.
Benefits include:
- Flexible/Hybrid working options
- A company performance related bonus
- Pension matched 1.5x up to 10.5%
- AXA group 1 medical cover
- Personal training budget
- Holiday buy-back scheme
- A flexible benefits scheme
Recruitment Process:
We want to ensure that you feel comfortable and confident when interviewing with us. To help you prepare, our recruitment team will discuss the process in more detail with you when you apply.
We are happy to support any accessibility or neurodiversity requirements.