Spark is a big data processing framework written in Scala that runs on the JVM. It is designed for speed, generality, and ease of use when processing large datasets. Key features include in-memory computation for speed, support for the MapReduce programming model, lazy evaluation of transformations (so the engine can optimize the whole query plan before executing it), and APIs for Scala, Java, Python, and R. The ecosystem includes Spark Streaming for real-time data, Spark SQL for SQL queries, and MLlib for machine learning. Resilient Distributed Datasets (RDDs) are Spark's fundamental data structure: immutable, partitioned collections that can be processed in parallel. MapReduce is a programming model for processing large amounts of data in parallel by mapping records to key-value pairs and then reducing the values grouped under each key.
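
The MapReduce model can be illustrated without Spark itself. The sketch below is a minimal plain-Python word count in the MapReduce style; the helper names `map_phase` and `reduce_phase` are hypothetical, chosen to mirror the two phases a Spark pipeline would express with `flatMap` and `reduceByKey`.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every line
    return [(word, 1) for line in lines for word in line.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts grouped under each key (word)
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["spark runs on the jvm", "spark is fast"]
print(reduce_phase(map_phase(lines)))
# "spark" appears twice, every other word once
```

In Spark, the same computation would be distributed: the map phase runs independently on each partition of the data, and the shuffle before the reduce phase brings all pairs with the same key to the same node.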