Spark Scala Developer Resume

Posted 04.12.2020 in Uncategorized

This candidate is an experienced Java Developer with strong hands-on experience on Spark and Big Data/Hadoop implementations using Scala/Java.

* Uploaded and processed terabytes of data from various structured and unstructured sources into HDFS (AWS cloud) using Sqoop and Flume.
* Strong experience working with Elastic MapReduce; experienced in using the Spark application master to monitor Spark jobs.
* Implemented batch processing of data sources using Apache Spark 1.4.x.
* Involved in HDFS maintenance and loading of structured and unstructured data.
* Developed different components of the system, such as Hadoop processes involving MapReduce and Hive.
* Worked on Java-based client connectivity requirements over JDBC.
* Worked with various HDFS file formats like Avro and SequenceFile, and various compression formats like Snappy.
* Used Rational Application Developer (RAD) for developing the application.
* Written stored procedures for those reports that use multiple data sources.
* Implemented counters on HBase data to count total records in different tables.
* Implemented Sqoop for large dataset transfers between Hadoop and RDBMS.
* Used Avro, Parquet, and ORC data formats to store data in HDFS.
* Developed Spark/MapReduce jobs to parse JSON and XML data.
* Experience in manipulating and analyzing large datasets and finding patterns and insights within structured and unstructured data.
* Hands-on experience installing, configuring, and using Hadoop ecosystem components: HDFS, MapReduce, Hive, Pig, YARN, Sqoop, Flume, HBase, Impala, Oozie, ZooKeeper, Kafka, and Spark.

Fill in your email ID to receive the Apache Spark resume document. Remember, you only need ONE resume template to land that dream job. Best wishes from the MindMajix team!

Spark Scala Developer
https://www.velvetjobs.com/resume/scala-developer-resume-sample
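Several bullets above mention MapReduce-style jobs for cleaning and counting data. As a minimal sketch of that map/reduce pattern, here is the classic word count in plain Scala collections, with no Hadoop dependency; the input lines are made up for illustration.

```scala
// Hypothetical input lines standing in for HDFS records.
val lines = Seq("spark and scala", "scala and hadoop")

// Map phase: split each line and emit (word, 1) pairs.
val mapped = lines.flatMap(_.split("\\s+")).map(w => (w, 1))

// Reduce phase: sum the counts per key, as a reducer (or Spark's
// reduceByKey) would after the shuffle groups pairs by word.
val counts = mapped.groupBy(_._1).map { case (w, ps) => (w, ps.map(_._2).sum) }
```

The same two-phase shape (emit key/value pairs, then aggregate per key) underlies most of the MapReduce and Spark jobs these bullets describe.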
Objective: Over 8+ years of experience in Information Technology with a strong background in analyzing, designing, developing, testing, and implementing data warehouse solutions in domains such as banking, insurance, health care, telecom, and wireless.

* Using the in-memory computing capabilities of Spark with Scala, performed advanced procedures like …
* Implemented Spark using Scala and Spark SQL for faster testing and processing of data.
* Responsible for performing code reviews and debugging.
* Created files and tuned SQL queries in Hive using Hue.
* Developed Pig Latin scripts to extract data from web server output files and load it into HDFS.
* Developed common application-level client-side validation using JavaScript.
* Developed and deployed the application on IBM WebSphere Application Server.
* Used Scala libraries to process XML data stored in HDFS; the processed data was stored back in HDFS.
* Analyzed the SQL scripts and designed the solution to implement them using PySpark.
* Involved in requirement analysis, design, development, and testing of the risk workflow system.
* Utilized the Spring MVC framework.

Power phrases for your Spark skills on a resume. Get started today! Just three simple steps: click the Download button relevant to your profile (Fresher or Experienced).

Spark / Scala Developer Resume Examples & Samples. We help clients activate ideas and solutions to take advantage of a new world of opportunity.

Resumes and other information uploaded or provided by the user are considered User Content governed by our Terms & Conditions.
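The Spark SQL bullets above boil down to group-by aggregations over tabular data. A hedged sketch of that kind of query, written over plain Scala case classes so it runs without a cluster; the `Txn` schema, field names, and values are invented for illustration:

```scala
// Hypothetical record type standing in for a Hive/Spark SQL table row.
case class Txn(region: String, amount: Double)

val txns = Seq(Txn("east", 10.0), Txn("west", 5.0), Txn("east", 2.5))

// Equivalent of: SELECT region, SUM(amount) FROM txns GROUP BY region
val totals = txns.groupBy(_.region).map { case (r, ts) => (r, ts.map(_.amount).sum) }
```

In actual Spark the same query would run over a DataFrame (`txns.groupBy("region").sum("amount")`), but the per-key aggregation logic is the same.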
* Installed and configured Hadoop MapReduce and HDFS; developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
* Worked with join patterns and implemented map-side joins and reduce-side joins using MapReduce.
* Implemented Spark using Scala, utilizing DataFrames and the Spark SQL API for faster processing of data.
* Hands-on experience in the analysis, design, coding, and testing phases of the Software Development Life Cycle (SDLC).
* Developed multiple Kafka producers and consumers as per the software requirement specifications.
* Able to work on own initiative; highly proactive, self-motivated, and resourceful, with strong commitment toward work.
* Built SSIS packages to load data into the OLAP environment and monitored ETL package jobs.
* Migrated code from Hive to Apache Spark and Scala using Spark SQL and RDDs.
* Added indexes to improve performance on tables.
* Well versed with Big Data ecosystem tools (Hive, HBase, Oozie, etc.).
* Created HBase tables to store variable data formats coming from different legacy systems.
* Involved in developing a linear regression model for predicting a continuous measurement.

Environment: Hadoop, Cloudera Manager, Linux (RedHat, CentOS, Ubuntu), MapReduce, HBase, Sqoop, Pig, HDFS, Flume, Python.

Apply securely with Indeed Resume: we are hiring a Hadoop Developer, someone familiar and experienced with Big Data technologies like Spark, Scala, Apache Oozie, HDFS, and Hive. Apply to Hadoop Developer, Java Developer, and more!

ETL Developer Resume
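The map-side join mentioned above avoids a shuffle by holding the smaller table in memory (a broadcast variable in Spark, the distributed cache in MapReduce) and joining each record of the large side by lookup. A plain-Scala sketch of the idea, with made-up dimension and fact data:

```scala
// Small "dimension" table, held entirely in memory on each mapper.
val users = Map(1 -> "alice", 2 -> "bob")

// Large "fact" side, streamed record by record: (userId, amount).
val orders = Seq((1, 100.0), (2, 50.0), (1, 25.0), (3, 75.0))

// Map-side inner join: look each record up in the in-memory table;
// records with no match (userId 3) are dropped, and no shuffle occurs.
val joined = orders.flatMap { case (id, amt) => users.get(id).map(name => (name, amt)) }
```

A reduce-side join, by contrast, tags records from both inputs with a common key and lets the shuffle bring matching records together at the reducer, which is necessary when neither side fits in memory.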
Headline: Junior Hadoop Developer with 4+ years of experience in project development, implementation, deployment, and maintenance using Java/J2EE and Big Data related technologies. Hadoop Developer with 4+ years of working experience designing and implementing complete end-to-end Hadoop-based data analytics solutions using HDFS, MapReduce, Spark, YARN, …

* Expertise with the tools in the Hadoop ecosystem including Pig, Hive, HDFS, MapReduce, Sqoop, Storm, …
* Teamed up with architects to design a Spark model for the existing MapReduce model, and migrated MapReduce models to Spark models using Scala.
* Experience in designing user interfaces using HTML, CSS, JavaScript, and JSP.
* Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.
* Worked on tool development, performance testing, and defect fixing.
* Developed the user interface using JSP, HTML, CSS, and JavaScript to simplify the complexities of the application.
* Involved in discussions with the business analysts for bug validation and fixing.
* Used Sqoop to extract the data back to the relational database for business reporting.

I took only the Cloud Block Storage source to simplify and speed up the process. We've collected 25 free real-time Hadoop, Big Data, and Spark resumes from candidates who have applied for various positions at indiatrainings. These are some of the most impressive and impactful resume samples from Python developers in key positions across the country, placed everywhere from unicorn startups to Fortune 100 companies.

Interested candidates can share an updated resume at [email protected].

Roles and Responsibilities:
* Create Scala/Spark jobs for data transformation and aggregation.
* Experience in writing code in Spark and Scala.
* Produce unit tests for Spark transformations and helper methods.
* Write Scaladoc-style documentation.

Python Developer Resume Samples
* Created ETL mappings with Talend Integration Suite to pull data from sources, apply transformations, and load data into the target database.
* Implemented a JMS topic to receive input in the form of XML and parsed it through a common XSD.
* Developed different Apache Spark jobs with Scala in order to process data, apply features, and launch several ML algorithms to train models and predict games' scores.
* Expertise in using Spark SQL with various data sources like JSON, Parquet, and Hive.
* Developed Hive queries and UDFs to analyze and transform the data in HDFS.
* Designed and created Hive external tables using a shared metastore instead of Derby, with partitioning, dynamic partitioning, and buckets.
* Defined the Accumulo tables and loaded data into them for near-real-time data reports.
* Involved in performance tuning wherever there was latency or delay in execution of code.
* Experience in data processing: collecting, aggregating, and moving data from various sources using Apache Flume and Kafka.
* Implemented Spark using Scala, utilizing Spark Core, Spark Streaming, and the Spark SQL API for faster processing of data instead of MapReduce in Java.

Overall 9+ years of professional IT experience, with 5 years of experience in analysis, architectural design, prototyping, development, integration, and testing of applications using Java/J2EE technologies and 3 years of experience as a Hadoop Developer.

Environment: MS SQL Server 2005/2008, Integration Services (SSIS), Reporting Services (SSRS).

Apply securely with Indeed Resume: as a Sr. Scala Developer with hands-on experience in Scala and a deep understanding of the AdTech domain, you will develop, maintain, evaluate, ... We are looking for a Spark and Scala developer; the location is Bangalore and the package is open.
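The Hive bucketing mentioned above assigns each row to a bucket by hashing the clustering column modulo the bucket count, which is what makes bucket-based joins and sampling cheap. A plain-Scala illustration of that assignment; the bucket count and state keys are arbitrary examples:

```scala
// Number of buckets, as declared in CLUSTERED BY ... INTO 4 BUCKETS.
val numBuckets = 4
val states = Seq("CA", "NY", "TX", "WA", "FL")

// Hash the bucket column modulo the bucket count; the double-mod
// keeps the result non-negative even for negative hash codes.
def bucketOf(key: String): Int = ((key.hashCode % numBuckets) + numBuckets) % numBuckets

// Every row lands in exactly one of the numBuckets buckets.
val buckets = states.groupBy(bucketOf)
```

Because two bucketed tables with the same bucket count hash identical keys to identical buckets, a join can match bucket i against bucket i directly instead of shuffling the full tables.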
* Design, develop, and modify data workflow software systems using scientific analysis and mathematical models to predict and measure outcomes and handle the consequences of design…
* AWS services including Amazon S3, Amazon Elastic MapReduce (EMR), and Amazon Athena, and Big Data ecosystem tools including Hadoop, Hive, HDFS, and Spark (PySpark…
* Must be from a Scala background and should have strong knowledge of build and migration procedures and of CI/CD pipelines.
* Worked on different file formats (ORCFile, TextFile) and different compression codecs (Gzip, Snappy, LZO).
* Created Hive dynamic partitions to load time-series data.
* Experienced in handling different types of joins; created tables, partitions, and buckets, and performed analytics.
* Experienced in importing/exporting data into HDFS/Hive from relational databases and Teradata using Sqoop.
* Handled continuous streaming data coming from different sources.
* Integrated Spring schedulers with the Oozie client as beans to handle cron jobs.
* Written JDBC statements, prepared statements, and callable statements in Java, JSPs, and Servlets.

ATS-friendly Python developer resume template.
Keywords: AJAX, Apache, API, application master, automation, backup, big data, C, C++, C#, capacity planning, clustering, Controller, CSS, DAO, data modeling, databases, debugging, disaster recovery, DTS, Eclipse, EJB, ETL, HTML, indexing, J2EE, Java, JavaBeans, JavaScript, JBoss, JDBC, JSON, JSP, Linux, memory optimization, MongoDB, MVC, MySQL, NoSQL, OLAP, operating systems, Oracle, Pig Latin, PL/SQL, Python, QA, RAD, RDBMS, real time, RedHat, relational databases, reporting, SAS, SDLC, Servlets, shell scripting, SOAP, software development, MS SQL Server, statistics, Struts, Tomcat, T-SQL, Unix, user interface validation, version control, web servers, WebSphere, Windows XP, workflow, XML.

Hadoop Developer Resume

This document describes a sample process of implementing part of the existing Dim_Instance ETL.

* Involved in analyzing system failures, identifying root causes, and recommending courses of action.
* Should have knowledge of containers and basic work experience.
* Strong understanding of the Hadoop ecosystem: HDFS, MapReduce, HBase, ZooKeeper, Pig, Hadoop Streaming, Sqoop, Oozie, and Hive.
* Installed and configured Hadoop MapReduce and HDFS; developed multiple MapReduce jobs in Java for data cleaning and processing.
* Created partitions and buckets based on State to further process using bucket-based Hive joins.
* Experience in creating tables, partitioning, bucketing, loading, and aggregating data using Hive.
* Migration of ETL processes from Oracle to Hive to test easy data manipulation.

SCJP 1.4 Sun Certified Programmer. Below is a sample resume screenshot.

PRO TIP: You can feature your unique Spark skills in the experience section. How to write an effective developer resume: advice from a hiring manager.
* Designed and developed data loading strategies and transformations for the business to analyze the datasets.
* Involved in Hadoop cluster environment administration, including adding and removing cluster nodes, cluster capacity planning, performance tuning, and cluster monitoring.
* Responsible for coding and deploying according to the client's requirements.
* Developed end-to-end data processing pipelines that begin with receiving data via distributed messaging systems.
* Implemented the Spring MVC design paradigm for website design.
* Hands-on knowledge of core Java concepts like exceptions, collections, data structures, multithreading, serialization, and deserialization.
* Used Impala for querying HDFS data to achieve better performance.
* Used Sqoop to efficiently transfer data between databases and HDFS, and used Flume to stream the log data from servers.
* Used Scala sbt to develop Scala-coded Spark projects and executed them using spark-submit; added security to the cluster by integrating …
* Extensively used the Extract, Transform, Load (ETL) tool of SQL Server to populate data from various data sources, and converted the SAS environment to SQL Server.
* Experience in NoSQL column-oriented databases like HBase and Cassandra and their integration with the Hadoop cluster.
* Converted the existing reports to SSRS without any change in the output of the reports.
* Followed the Scrum approach for the development process.
* Used SQL queries to perform backend testing on the database.
* Used Hive optimization techniques during joins and best practices in writing Hive scripts using HiveQL.

Languages: Java/J2EE, Python, SQL, HiveQL, NoSQL, Pig Latin.

About TEKsystems: We're partners in transformation. We help clients activate ideas and solutions to take advantage of a new world of opportunity.

Overall 8-10 years of IT experience, with 4-6 years of Spark/Scala programming experience.
* Involved in design of the application database schema; written complex SQL queries, stored procedures, functions, and triggers in PL/SQL.
* Involved in migrating tables from RDBMS into Hive tables; analyzed substantial data sets by running queries.
* Good understanding of Cassandra architecture, replication strategy, gossip, snitch, etc.
* Implemented a data pipeline by chaining multiple mappers using ChainMapper.
* Worked with, and learned a great deal from, Amazon Web Services (AWS) cloud services like EC2, S3, EBS, RDS, and VPC.
* Collected and aggregated large amounts of web log data from different sources such as web servers, mobile, and network devices using Apache Flume.
* Import and export of data from one server to other servers using tools like Data Transformation Services (DTS).
* Hands-on experience with sequence files, RC files, combiners, counters, dynamic partitions, and bucketing for best practice and performance improvement.
* Modified technical design documents and functional design documents to accommodate change requests.
* Imported required tables from RDBMS to HDFS using Sqoop, and also used Storm and Kafka to get real-time streaming of data into HBase.
* In-depth understanding of Hadoop architecture and its components such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and MapReduce.
* Used JIRA as a bug-reporting tool for updating the bug report.
* Written Hive queries to extract the processed data.
* Created automated processes for activities such as database backups and SSIS packages run sequentially using Control-M.
* Involved in performance tuning of code using execution plans and SQL Profiler.

© 2020 Hire IT People, Inc.

To make your resume shine, write in active language and utilize direct action verbs. There are two ways in which you can build your resume. Chronological: this is the traditional way of building a resume, where you mention your experience in the order it took place.
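Hadoop's ChainMapper, mentioned above, runs several map stages back to back inside a single task, so records pass through a sequence of transformations in one pass with no intermediate job. The composition idea can be sketched in plain Scala; the cleaning stages here are hypothetical:

```scala
// Individual "mapper" stages, each a simple record transformation.
val trim: String => String = _.trim
val lower: String => String = _.toLowerCase

// Chain the stages into one composite mapper, as ChainMapper does.
val clean: String => String = trim.andThen(lower)

// One pass over the input applies the whole chain, then filters
// out records that became empty after cleaning.
val out = Seq("  Spark ", "SCALA", "  ").map(clean).filter(_.nonEmpty)
```

Chaining keeps each stage small and testable while avoiding the cost of writing intermediate output between MapReduce jobs.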
* Created a database maintenance planner for SQL Server performance, covering database integrity checks, database statistics updates, and re-indexing.
* Involved in creating Hive tables and Pig tables, loading data, and writing Hive queries and Pig scripts.
* Explored Spark 1.4.x, improving the performance and optimization of the existing algorithms in Hadoop 2.5.2 using SparkContext, Spark SQL, and DataFrames.
* Involved in HBase setup and in storing data into HBase, which will be used for analysis.
* Involved in performance tuning of Spark applications: fixing the right batch interval time and memory tuning.
* Developed a web application in the open-source Java framework Spring.
* Used Hive to do transformations, event joins, and some pre-aggregations before storing the data in HDFS.
* Developed the presentation layer of the project using HTML, CSS, JSP, and JavaScript technologies.
* Worked on Big Data projects for Google and Sling Media.
* Used the WebHDFS REST API to make HTTP GET, PUT, POST, and DELETE requests from the web server to perform analytics on the data lake.
* Used SVN as the version control system for the source code.
* Experienced in importing and exporting data into HDFS and Hive; experienced in handling data from different data sets, joining them, and preprocessing.
* Extensively used Apache Flume to collect the logs and error messages across the cluster.
* Experienced in working with the Spark ecosystem, using Spark SQL and Scala queries on different formats like text files and CSV files.
* Implemented Flume to import streaming data logs and aggregate the data to HDFS.
* Involved in creating shell scripts to simplify the execution of all other scripts (Pig, Hive, Sqoop, Impala, and MapReduce) and to move the data inside and outside of HDFS.
* Used WebSphere Application Server for deploying the application.

Big Data Developer Resume Samples: examples of curated bullet points for your resume to help you get an interview. Download the file below.
* Acted on bringing in data under HBase using the HBase shell and the HBase client API.
* Implemented the workflows using the Apache Oozie framework to automate tasks.
* Experience in working with Flume to load the log data from multiple sources directly into HDFS.
* Experience in usage of Hadoop distributions like Cloudera 5.3 (CDH5, CDH3), the Hortonworks distribution, and Amazon AWS.
* Gathered requirements for the creation of data flow processes for the SSIS packages.
* Experience in manipulating/analyzing large datasets and finding patterns and insights within structured and unstructured data.
* Worked on transferring data using SQL Server Integration Services packages; extensively used the SSIS Import/Export Wizard for performing ETL operations.
* Load and transform large sets of structured and semi-structured data using Hive.
* Involved in writing custom MapReduce programs using the Java API for data processing.
* Developed multiple MapReduce jobs to perform data cleaning and preprocessing.
* Involved in cluster coordination services through ZooKeeper.
* Implemented secondary sorting to sort reducer output globally in MapReduce.
* Extensive involvement in analyzing the requirements and detailed system study.
* Implemented different J2EE design patterns such as Session Facade, Observer, Observable, Singleton, and Business Delegate to accommodate feature enhancements and change requests.

Also, try to include precise numbers for the results you achieved in your previous roles.

Must haves: Spark/Scala (required), SQL (required), Kafka, Kubernetes/Docker, Unix, Mainframe, Security.

Environment: Hadoop, HDFS, Spark, MapReduce, Hive, Sqoop, Kafka, HBase, Oozie, Flume, Scala, AWS, Python, Java, JSON, SQL scripting and Linux shell scripting, Avro, Parquet, Hortonworks.

PROFESSIONAL SUMMARY
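Secondary sorting, mentioned above, orders records within each reducer's key group by a second field, typically by building a composite key of (group key, sort key). The effect can be sketched with plain Scala ordering; the user/sequence-number records are invented:

```scala
// Records as (groupKey, sortKey): e.g. (userId, eventSeqNo).
val events = Seq(("u2", 3), ("u1", 2), ("u2", 1), ("u1", 5))

// Sorting on the composite key groups records by user and orders
// them by sequence number within each group, which is exactly what
// MapReduce secondary sort achieves via a custom partitioner,
// grouping comparator, and sort comparator.
val sorted = events.sortBy { case (user, seqNo) => (user, seqNo) }
```

In Hadoop the same result needs three pieces of plumbing (partition by the natural key, sort by the composite key, group by the natural key) because the framework, not the user code, performs the sort during the shuffle.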
This Java Developer resume article will help you craft an impressive resume when you are applying for a Java Developer role.

* Involved in moving all log files generated from various sources to HDFS for further processing through Flume.
* Developed web services using the Play framework in order to interact with the machine learning models (for …
* Worked on Talend Open Studio and Talend Integration Suite.
* Used the Spark API over Hortonworks Hadoop YARN to perform analytics on data in Hive.
* Modified and added database functions, procedures, and triggers pertaining to the business logic of the application.
* Involved in the analysis, design, and development phases of the software development lifecycle.
* Participate in the design and implementation of a large and architecturally significant application; communicate effectively with both business and technical audiences; build partnerships across application, business, and infrastructure teams.
* Involved in converting MapReduce programs into Spark transformations using Spark RDDs in Scala.
* Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs, Scala, and Python.
* Used the JIRA tracking tool to manage and track issues reported by QA and to prioritize and take action based on severity.
* Adequate knowledge of, and working experience with, Agile and Waterfall methodologies.
* Hands-on experience using JBoss for EJB and JTA and for caching and clustering purposes.
* Experience in transferring data from RDBMS to HDFS and Hive tables using Sqoop.
* Developed different kinds of custom filters and handled pre-defined filters on HBase data using the API.

W3Global Inc., Toronto, ON.

The contact information section is important in your Scala developer resume.
* Spark, Scala or Python or Java, Impala, Hive, or other related technologies; hands-on programming experience in Spark including but not limited to …
* Worked with HiveQL on big data of logs to perform trend analysis of user behavior on various online modules.

Roles & Responsibility:
* Built Spark scripts by utilizing Scala shell commands depending on the requirement.
* Implemented Apache Pig scripts to load data from, and store data into, Hive.
* Created new database objects like tables, procedures, functions, triggers, and views using T-SQL.
* Imported data from AWS S3 into Spark RDDs and performed transformations and actions on the RDDs.

Position: Senior Spark/Scala Developer. Location: Atlanta, GA (remote to start). Interview type: phone and WebEx. Visa type: only Citizen, EAD, TN-VISA, H1B; no OPT. Duration: long-term contract. Must haves: overall 8-10 years of IT experience with 4-6 years of Spark/Scala programming experience.

* Worked with Java and MySQL day to day to debug and fix issues with client processes.
* Developed complex MapReduce jobs in Java to process large data sets.
* Used Kafka for log aggregation, gathering physical log documents off servers and placing them in a focal spot like HDFS for processing.
* Used Spark SQL to load data from various sources like AWS S3 and LFS into Spark RDDs, and retrieved data from Cassandra tables.
* Expertise in using accumulator variables, and RDD caching for Spark Streaming.
* Experienced in handling different types of joins in Spark, and in transformations like aggregateByKey and combineByKey.
* Developed a POC to perform real-time analytics on streaming data.
* Worked on custom Pig loaders and storage classes to work with a variety of data formats such as JSON and compressed CSV.
* Worked on developing scalable distributed data solutions using the Hadoop framework.
* Worked with a huge volume of data in different formats (text format and ORC format) while loading the data.
* Developed parsers in Java for XML production data.
* Business logic implemented in Servlets; performed unit testing and used Log4j for logging.
* Used Oozie to coordinate Pig and MapReduce jobs.
* Created Hive tables, loaded data into them, and handled structured data coming from various sources.
* Involved in business requirement gathering, technical design documents, business use cases, and data visualization from weblogs.
* Implemented unit test cases using testing frameworks.
* Performed a POC analysis of the Hadoop cluster and different Big Data analytic tools including Pig, Hive, and HBase.
* Analyzed database turnaround times and query round-trip behavior.

Download your resume, easily edit it, print it out, and get it interview-ready! Experience processing Avro data files using Avro tools and MapReduce programs.

© 2020, Bold Limited.

The statement "Scala is hard to master" is definitely true to some extent, but the learning curve of Scala for Spark is well worth the time and money. Anyone in the Big Data world should be smart enough to learn a programming language. The team that started the Spark research project at UC Berkeley founded Databricks in 2013.

The recruiter has to be able to contact you ASAP if they would like to offer you the job.
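The aggregateByKey and combineByKey transformations mentioned above build a per-key accumulator (such as a running sum and count) in a single pass, which is how averages are computed in Spark without collecting whole groups. A plain-Scala analogue of that fold, with invented key/value pairs:

```scala
// Hypothetical (key, value) pairs standing in for an RDD.
val pairs = Seq(("a", 2.0), ("b", 4.0), ("a", 6.0))

// Fold each value into a per-key (sum, count) accumulator in one
// pass, mirroring aggregateByKey's zero value and merge functions.
val sumCount = pairs.foldLeft(Map.empty[String, (Double, Int)]) {
  case (acc, (k, v)) =>
    val (s, c) = acc.getOrElse(k, (0.0, 0))
    acc.updated(k, (s + v, c + 1))
}

// Final map-side step: turn each accumulator into an average.
val avgs = sumCount.map { case (k, (s, c)) => (k, s / c) }
```

In Spark the same (sum, count) accumulator would also be merged across partitions, which is why aggregateByKey takes separate within-partition and cross-partition combine functions.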
