Create data packages in the form of databases, reports, and visualizations.
Communicate ongoing data science activities, technical findings, and data products for both technical and non-technical customers.
Extract relevant features from large data stores containing open source, PIA, and CAI data that may include bad records, partial records, errors, or other forms of noise.
Extract features from open source information stored in a wide range of possible formats, including JSON, XML, raw text logs, industry-specific encodings, and graph link data.
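As a minimal, hypothetical sketch of the format-handling duty above, the snippet below pulls simple features from a JSON record and a raw text log line using only the Python standard library. All field names and the log layout are illustrative assumptions, not formats specified by this position.

```python
# Hypothetical feature extraction from heterogeneous open-source formats.
# Field names ("source", "text") and the log line layout are assumptions.
import json
import re

def features_from_json(record: str) -> dict:
    """Extract illustrative features from a JSON record."""
    obj = json.loads(record)
    return {"source": obj.get("source"), "length": len(obj.get("text", ""))}

def features_from_log(line: str) -> dict:
    """Extract features from a raw log line, assuming 'TIMESTAMP LEVEL message'."""
    m = re.match(r"(\S+)\s+(\w+)\s+(.*)", line)
    return {"timestamp": m.group(1), "level": m.group(2), "tokens": m.group(3).split()}

json_feats = features_from_json('{"source": "feed1", "text": "hello world"}')
log_feats = features_from_log("2024-01-01T00:00:00Z INFO pipeline started")
print(json_feats)
print(log_feats)
```

Real pipelines for XML, industry-specific encodings, or graph link data would swap in the appropriate parser per format while keeping the same record-to-feature-dict shape.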
Apply natural language processing, computer vision, signal processing, and speaker and speech recognition algorithms to identify objects in text, image, video, and audio files.
Apply descriptive and inferential statistics to describe data and make predictions about it, including statistical tests to determine confidence for a hypothesis and common summary statistics (e.g., mean, variance, and counts); fit distributions to datasets and use those distributions to predict event likelihoods.
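A minimal sketch of the statistical duties above, using NumPy and SciPy: summary statistics, a one-sample hypothesis test, and a fitted distribution used to estimate an event likelihood. The synthetic dataset and the threshold of 8.0 are illustrative assumptions.

```python
# Hypothetical sketch: summary statistics, a hypothesis test, and
# distribution fitting for event-likelihood prediction.
# The data are synthetic; parameters and thresholds are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=2.0, size=1000)  # stand-in dataset

# Common summary statistics
mean, var, count = sample.mean(), sample.var(ddof=1), sample.size

# Statistical test: is the sample mean plausibly 5.0?
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)

# Fit a normal distribution and use it to predict an event likelihood
loc, scale = stats.norm.fit(sample)
p_above_8 = stats.norm.sf(8.0, loc=loc, scale=scale)  # P(X > 8)

print(f"mean={mean:.2f} var={var:.2f} n={count}")
print(f"p-value={p_value:.3f}  P(X > 8)={p_above_8:.3f}")
```

The same fit/survival-function pattern extends to other distribution families (e.g., exponential or Poisson) when the data call for them.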
Execute data science methods using parallel computing frameworks (e.g., Deeplearning4j, Torch, TensorFlow, Caffe, Neon, the NVIDIA CUDA Deep Neural Network library (cuDNN), and OpenCV) and distributed data processing frameworks (e.g., Hadoop, including HDFS, HBase, Hive, Impala, Giraph, and Sqoop; and Spark, including MLlib, GraphX, SQL, and DataFrames).
Execute data science methods using common programming/scripting languages such as Python, Java, Scala, and R (statistics).
Apply your technical expertise to exploit available data using specialized methodologies and digital tools.
Support multiple simultaneous projects; working from open-ended or high-level guidance, independently and collaboratively make mission-relevant discoveries and package and deliver the findings to a non-technical audience.
Provide technical expertise that assists in developing system requirement documents and capability/feasibility assessments.
Lead projects, including planning, monitoring, status reporting, communication, and on-time delivery.
Collaborate with other technical teams to implement advanced analytics algorithms that exploit rich datasets for statistical analysis, prediction, clustering, and machine learning.
Perform advanced assessment, problem solving and analysis for data science challenges.
Design and develop algorithms to extract, transform, load, and normalize large volumes of structured and unstructured data from diverse sources, including big data sources.
Coordinate research and analytic activities using various data points (structured and unstructured) and employ programming techniques to develop relational models of the resolved data sources and of large structured, semi-structured, or unstructured datasets.
Develop algorithms to elicit contextual understanding of data.
Work with data management team on enhancements to existing pipelines.
Coordinate with Data Engineers to build data environments providing data identified by Data Analysts, Data Integrators, Knowledge Managers, and Intel Analysts.
Select machine learning/artificial intelligence or other models, and coach team members on model tuning and evaluation.
Coach team members in a high-performing environment.
We are GDIT. The people supporting some of the most complex government, defense, and intelligence projects across the country. We deliver. Bringing the expertise needed to understand and advance critical missions. We transform. Shifting the ways clients invest in, integrate, and innovate technology solutions. We ensure today is safe and tomorrow is smarter. We are there. On the ground, beside our clients, in the lab, and everywhere in between. Offering the technology transformations, strategy, and mission services needed to get the job done.
GDIT is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other protected class.