DISCLOSED

Managing large datasets in R
– ff examples and concepts
Jens Oehlschlägel

Vienna
January 2010

This report contains public intellectual property. It may be used, circulated, quoted, or reproduced for distribution as a
whole. Partial citations require a reference to the author and to the whole document and must not be put into a context
which changes the original meaning. Even if you are not the intended recipient of this report, you are authorized and
encouraged to read it and to act on it. Please note that you read this text at your own risk. It is your responsibility to draw
appropriate conclusions. The author may neither be held responsible for any mistakes the text might contain nor for any
actions that other people carry out after reading this text.
Summary

The statistical interpreter R is hungry for RAM
and therefore limited to dataset sizes much
smaller than available RAM. The R packages 'bit'
and 'ff' provide the basic infrastructure for
handling large data problems in R. In this
session we give an introduction to 'bit' and 'ff',
interweaving working examples with short
explanations of the most important concepts.

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

1
An open-source vision for R

give free access to excellent
statistical software to everyone …
… for processing large datasets
even with standard hardware

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

2
Design goals: base packages for large data objects in R

large data
• many objects (sum(sizes) > RAM and …)
• large objects (size > RAM and virtual address space limitations)
"the cloud"

standard HW
• limited RAM (or enjoy speedups)
• single disk (or enjoy RAID)
• single processor (or shared processing)

minimal RAM
• required RAM << maximum RAM
• be able to process large data in background

maximum performance
• close to in-RAM performance if size < RAM (system cache)
• still able to process if size > RAM
• avoid costs of redundant access to disk (time, electricity, CO2)

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

3
Authors
Daniel Adler

dadler@uni-goettingen.de
package rgl
package ff since 1.0
dyncall project: dynamic C interface
in work: dynamic compilation and streaming for R

Jens Oehlschlägel Jens_Oehlschlaegel@truecluster.com
package bit since 1.0
package ff since 2.0
prototype R.ff

Development

http://r-forge.r-project.org/projects/ff/
Production

http://cran.r-project.org/web/packages/ff

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

4
Statistical methods that can be scaled to large data problems
using the infrastructure provided by packages bit and ff
Basic infrastructure for large objects
(packages bit and ff)
Basic infrastructure for chunking
(packages bit and ff)
Reading and writing csv files
(chunked sequential access with package ff)

Data transformation
(chunked and partially parallelized with package R.ff)
Math and probability distributions
(chunked and parallelized with package R.ff)
Data filtering
(chunked and parallelized, supported by bit and ff)
Descriptive statistics on chunks
(chunked and parallelized, supported by ff)
Biglm and bigglm
(chunked fitting with package biglm; see the sketch after this slide)
Bootstrapping
(chunked and parallelized random access)
Bagged predictive modelling
(chunked and parallelized random access)
Bagged clustering
(chunked and parallelized random access with truecluster)
Likelihood maximization
(chunked and parallelized sequential access)
EM-algorithm
(chunked and partially parallelized)
Some linear algebra
(chunked processing with package R.ff)
• matrix transpose
• matrix multiplication
• matrix inversion
• svd for many rows and few columns
Saving and loading ff archives
(incremental save and selective load with package ff)
new today: ffsave / ffload

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

5
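A minimal sketch of the chunked biglm fitting referenced above; the column names y, x1, x2, the ffdf fdf holding them, and the chunk size are illustrative assumptions, not part of the original slides:

library(ff)
library(biglm)
fit <- NULL
for (i in chunk(1, nrow(fdf), by=50000)) {
  d <- fdf[i, ]                       # bring one chunk into RAM as a data.frame
  fit <- if (is.null(fit)) biglm(y ~ x1 + x2, data=d) else update(fit, d)
}
summary(fit)                          # coefficients estimated from all chunks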
Infrastructure in bit and ff addresses a range of performance-critical topics

Complexities in scope of ff and bit
• virtual objects by reference: avoid copying and RAM duplication
• disk based objects: manage temporary and permanent objects on disk
• memory efficient data types: minimize storage requirements of data
• memory efficient subscript types: minimize storage requirements of subscripts
• fast chunk access methods: allow fast (random) access to disk, for chunks of data
• fast index (pre)processing: minimize or avoid cost of subscript preparation
• chunk processing infrastructure: foundation for efficient chunked processing
• large csv import/export: interface large datasets
• large data management: conveniently manage all files behind ff

Complexities partially in scope
• parallel processing: parallel access to large datasets (without locking)

Complexities in scope of R.ff
• basic processing of large objects: elementwise operations and more
• some linear algebra for large objects: t, vt, matmul, matinv, few column svd

Currently not in scope
• full linear algebra for large objects: observe package bigmemory

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

6
Basic memory organisation
[Diagram: a single R process accesses ff files through the filesystem cache; below that sit a choice of filesystem, write cache, and storage such as SSD, RAID, or hard disk.]

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

7
Architecture behind packages ff and bit implements several performance
optimizations

R frontend: Hybrid Index Preprocessing
• HIP
  - parsing of index expressions instead of memory consuming evaluation
  - ordering of access positions and re-ordering of returned values
  - rapid rle packing of indices if and only if the rle representation uses less
    memory than raw storage
• Hybrid copying semantics
  - virtual dim/dimorder()
  - virtual windows vw()
  - virtual transpose vt()
• New performance generics
  - clone(), update(), swap(), add(), chunk(), bigsample()
• Efficient coercions

C interface: fast access methods
• C-code accelerating is.unsorted() and rle() for integers:
  intisasc(), intisdesc(), intrle()
• C-code for looping over the hybrid index can handle mixed raw and rle packed
  indices in arrays and avoids multiplication
• C-code for looping over bit: the outer loop fixes a word in the processor cache,
  the inner loop handles its bits

C++ backend: memory mapped compressed pages
• Tunable pagesize and system caching = c("mmnoflush", "mmeachflush")
• Custom datatype bit-level en/decoding, 'add' arithmetics and NA handling
• Ported to Windows, Mac OS, Linux and BSDs
• Large File Support (>2GB) on Linux
• Paged shared memory allows parallel processing
• Fast creation and modification of large files on sparse filesystems

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

8
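A minimal sketch of the backend tuning knobs named above; the values are illustrative, and if your ff version does not accept pagesize/caching in ff() they can be passed to open() instead:

library(ff)
x <- ff(vmode="double", length=1e6, pagesize=65536, caching="mmeachflush")
physical(x)$pagesize                  # page size used by the memory-mapped backend
physical(x)$caching                   # "mmeachflush" flushes each page after writing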
Basic example: creating atomic vectors
# R example
ri <- integer(10)

# ff examples
library(ff)
fi <- ff(vmode="integer", length=10)
fb <- ff(vmode="byte", length=10)
rb <- byte(10) # in R this is integer
fb <- ff(rb)
vmode(ri)
vmode(fi)
vmode(rb)
vmode(fb)
cbind(.rambytes, .ffbytes)[c("integer","byte"),]
?vmode

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

9
Atomic data types supported by ff

implemented / not implemented

vmode(x)   storage                   NA support
boolean    1 bit logical             no NA
logical    2 bit logical             with NA
quad       2 bit unsigned integer    no NA
nibble     4 bit unsigned integer    no NA
byte       8 bit signed integer      with NA
ubyte      8 bit unsigned integer    no NA
short      16 bit signed integer     with NA
ushort     16 bit unsigned integer   no NA
integer    32 bit signed integer     with NA
single     32 bit float              no NA
double     64 bit float              with NA

not implemented:
complex    2x64 bit float            with NA
raw        8 bit unsigned char       no NA
character  fixed widths, tbd.        with NA

Compounds
factor
ordered
custom compounds
• Date
• POSIXct
• POSIXlt (not atomic)

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts
10
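A minimal sketch comparing the per-element storage of a few vmodes from the table, using the .rambytes/.ffbytes constants shown on the previous slide:

library(ff)
cbind(.rambytes, .ffbytes)[c("logical","quad","nibble","byte","integer","double"), ]
dna <- ff(vmode="quad", length=10, levels=c("A","T","G","C"))  # 2 bits per element
vmode(dna)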
Advanced example: creating atomic vectors
# R example
rf <- factor(levels= c("A","T","G","C"))
length(rf) <- 10
rf
# ff examples
frf <- ff(rf)
length(frf) <- 1e8
frf
frf[11:1e8] <- NA
ff(vmode="quad", length=1e8, levels=c("A","T","G","C"))
ff(vmode="quad", length=10
, levels=c("A","B","C","D"), ordered=TRUE)
ff(Sys.Date()+0:9, length=10)
ff(Sys.time()+0:9, length=10)
ff(0:9, ramclass="Date")
ff(0:9, ramclass=c("POSIXt", "POSIXct"))
str(ff(as.POSIXct(as.POSIXlt(Sys.time(), "GMT")), length=12))

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

11
Limiting R's RAM consumption through chunked processing
# ff example
> str(chunk(fd))
List of 50
$ :Class 'ri' int [1:3] 1 2000000 100000000
$ :Class 'ri' int [1:3] 2000001 4000000 100000000
$ :Class 'ri' int [1:3] 4000001 6000000 100000000
[snipped]
# as simple as seq, but balancing chunk sizes
> args(chunk.default)
function (from = NULL, to = NULL, by = NULL, length.out = NULL
, along.with = NULL, overlap = 0L, method = c("bbatch", "seq"), ...)
# automatic calculation of chunk size
> args(chunk.ff_vector)
function (x
, RECORDBYTES = .rambytes[vmode(x)]
, BATCHBYTES = getOption("ffbatchbytes")
, ...)
# by default limited at 1% of available RAM
> getOption("ffbatchbytes") / 1024^2 / memory.limit()
[1] 0.01
Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

12
Basic example: working with atomic vectors
# R example
rd <- double(100)
rd[] <- runif(100) # write
rd[] # this is the proper non-lazy way to read

# ff example
fd <- ff(vmode="double", length=1e8)
system.time(
for (i in chunk(fd)) fd[i] <- runif(sum(i))
)
system.time(
s <- lapply( chunk(fd)
, function(i)quantile(fd[i], c(0.05, 0.95)) )
)
crbind(s)

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

13
Negligible RAM duplication for parallel execution on ff objects
[Diagram: several R worker processes access the same ff files through the shared filesystem cache (choice of filesystem, write cache, SSD, RAID, hard disk); the data is held once in the filesystem cache rather than duplicated in each R process.]

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

14
Advanced example: working parallel with atomic vectors
library(snowfall)
finalizer(fd)
# let slaves not delete fd on shutdown
finalizer(fd) <- "close"
sfInit(parallel=TRUE, cpus=2, type="SOCK")
sfLibrary(ff)
sfExport("fd") # do not export the same ff multiple times
sfClusterEval(open(fd)) # explicitly opening avoids a gc problem
system.time(
sfLapply( chunk(fd), function(i){
fd[i] <- runif(sum(i))
invisible()
})
)
system.time(
s <- sfLapply( chunk(fd)
, function(i) quantile(fd[i], c(0.05, 0.95)) )
)
sfClusterEval(close(fd)) # for completeness
csummary(s)
sfStop()
Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

15
Supported index expressions

implemented / not implemented

x <- ff(1:12, dim=c(3,4), dimnames=list(letters[1:3], NULL))
i <- as.bit(c(TRUE, FALSE, FALSE))

expression                     example
(which) positive integers      x[ 1, 1]
negative integers              x[ -(2:12) ]
logical                        x[ c(TRUE, FALSE, FALSE), 1]
character                      x[ "a", 1]
integer matrices               x[ rbind(c(1,1)) ]
bit                            x[ i & i, 1]
bitwhich                       x[ as.bitwhich(i), 1]
range index                    x[ ri(1,1), 1]
hybrid index (ff's internal)   x[ as.hi(1), 1]
zeros                          x[ 0 ]
NAs                            x[ NA ]

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

16
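A minimal runnable sketch combining a few of the subscript types from the table above:

library(ff)
library(bit)
x <- ff(1:12, dim=c(3,4), dimnames=list(letters[1:3], NULL))
i <- as.bit(c(TRUE, FALSE, FALSE))
x[1, 1]                 # positive integer subscript
x[i & i, 1]             # bit filter on the rows
x[as.bitwhich(i), 1]    # the same filter as bitwhich
x[ri(1, 2), 1]          # range index: rows 1..2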
Basic example: working with bit filters
# R example
l <- rd > 0.99
rd[l]
1e8 * .rambytes["logical"] / (1024^2) # 381 MB for logical
# ff example
b1 <- b2 <- bit(length(fd))
system.time( b1[] <- c(FALSE, TRUE) )
system.time( for (i in chunk(fd)) b2[i] <- fd[i] > 0.99 )
system.time( b <- b1 & b2 )
object.size(b) / (1024^2)
system.time( x <- fd[b] )
x[1:10]
sum(b) / length(b)   # less dense than 1/32

w <- as.bitwhich(b)
sum(w) / length(w)
object.size(w) / (1024^2)
system.time( x <- fd[w] )
x[1:10]
Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

17
Advanced example: working with hybrid indexing
# ff example
hp <- as.hi(b)
# ignores pack=FALSE
object.size(hp) / (1024^2)
system.time( x <- fd[hp] )
x[1:10]
hu <- as.hi(w, pack=FALSE)
object.size(hu) / (1024^2)
system.time( x <- fd[hu] )
x[1:10]
# Don't do
as.hi(1:1e8)
# Do
as.hi(quote(1:1e8))
hi(1, 1e8)
ri(1, 1e8)
chunk(1, 1e8, by=1e8)

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

18
Supported data structures

soon on CRAN / prototype available / not yet implemented

structure                       example                                       class(x)
vector                          ff(1:12)                                      c("ff_vector","ff")
matrix                          ff(1:12, dim=c(3,4))                          c("ff_matrix","ff_array","ff")
array                           ff(1:12, dim=c(2,2,3))                        c("ff_array","ff")
data.frame                      ffdf(sex=a, age=b)                            c("ffdf","ff")
symmetric matrix (free diag)    ff(1:6, dim=c(3,3), symm=TRUE, fixdiag=NULL)  c("ff_symm","ff")
symmetric matrix (fixed diag)   ff(1:3, dim=c(3,3), symm=TRUE, fixdiag=0)
distance matrix                                                               c("ff_dist","ff_symm","ff")
mixed type arrays
instead of data.frames                                                        c("ff_mixed","ff")

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

19
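A minimal sketch of the vector, matrix and data.frame rows of the table above:

library(ff)
v <- ff(1:12)
class(v)                              # c("ff_vector","ff")
m <- ff(1:12, dim=c(3,4))
class(m)                              # c("ff_matrix","ff_array","ff")
sex <- as.ff(factor(c("m","f","m","f")))
age <- as.ff(c(25L, 30L, 35L, 40L))
d <- ffdf(sex=sex, age=age)
class(d)                              # ffdf classes as listed above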
Basic example: working with arrays (and matrices)
# R example: physically stored by column
array(1:12, dim=c(3,4))
# read by column
matrix(1:12, 3,4, byrow=TRUE) # read by row
# ff example: physically stored by column – like columnar OLAP
ff(1:12, dim=c(3,4))
# read by column
ff(1:12, dim=c(3,4), bydim=c(2,1))
# read by row
# ff example: physically stored by row – like OLTP database
ff(1:12, dim=c(3,4), dimorder=c(2,1))
# read by column
ff(1:12, dim=c(3,4), dimorder=c(2,1), bydim=c(2,1)) # read by row
fm <- ff(1:12, dim=c(3,4), dimorder=c(2,1))
get.ff(fm, 1:12) # note the physical order
fm[1:12]
# [. exhibits standard R behaviour
ncol(fm) <- 1e8 # not possible with this dimorder
nrow(fm) <- 1e8 # possible with this dimorder
fm <- ff(vmode="double", dim=c(1e4, 1e4))
system.time( fm[1,] <- 1 ) # column store: slow
system.time( fm[,1] <- 1 ) # column store: fast
# even more pronounced difference for caching="mmeachflush"
Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

20
Hybrid copying semantics: physical and virtual attributes
x <- ff(1:12, dim=c(3,4))
x
> str(physical(x))
List of 9
 $ vmode    : chr "integer"      # set via ff(); read via vmode()
 $ maxlength: int 12             # set via ff(); read via maxlength()
 $ pattern  : chr "ff"           # pattern()<-
 $ filename : chr "D:/DOCUME~1/JENSOE~1/LOCALS~1/Temp/RtmpKDilKg/ff462a1d5a.ff"   # filename()<-
 $ pagesize : int 65536          # open(, pagesize=)
 $ finalizer: chr "delete"       # finalizer()<-
 $ finonexit: logi TRUE          # ff()
 $ readonly : logi FALSE         # open(, readonly=)
 $ caching  : chr "mmnoflush"    # open(, caching=)
> str(virtual(x))
List of 4
 $ Length   : int 12             # length()<-
 $ Dim      : int [1:2] 3 4      # dim()<-
 $ Dimorder : int [1:2] 1 2      # dimorder()<-
 $ Symmetric: logi FALSE         # ff(); symmetric()
Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts
21
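A minimal sketch of the split between physical and virtual attributes: two R symbols can share one file while keeping separate virtual views (the next slide shows the same effect in detail):

library(ff)
x <- ff(1:12, dim=c(3,4))
physical(x)$vmode       # "integer": stored once, with the file
virtual(x)$Dim          # 3 4: belongs to this R-side view
y <- x                  # same file behind both symbols
dim(y) <- c(4,3)        # changes only y's virtual view
dim(x)                  # still 3 4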
Hybrid copying semantics in action: different virtual 'views' into the same ff
a <- ff(1:12, dim=c(3,4))
b <- a
dim(b) <- c(4,3)
dimorder(b) <- c(2:1)

> a
ff (open) integer length=12(12) dim=c(3,4) dimorder=c(1,2)
     [,1] [,2] [,3] [,4]
[1,]    1    4    7   10
[2,]    2    5    8   11
[3,]    3    6    9   12

> b   # a virtual transpose of a, backed by the same file
ff (open) integer length=12(12) dim=c(4,3) dimorder=c(2,1)
     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    4    5    6
[3,]    7    8    9
[4,]   10   11   12

vt(a)   # shortcut to virtually transpose
t(a)    # == clone(vt(a))

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

22
Basic example: working with data.frames
# R example
id <- 1:12
gender <- sample(factor(c("male","female","unknown")), 12, TRUE)
rating <- matrix(sample(1:6, 12*10, TRUE), 12, 10)
colnames(rating) <- paste("r", 1:10, sep="")
df <- data.frame(id, gender, rating)
df[1:3,]

# ff example
fid <- as.ff(id); fgender <- as.ff(gender); frating <- as.ff(rating)
fdf <- ffdf(id=fid, gender=fgender, frating)
identical(df, fdf[,])
fdf[1:3,]    # data.frame
fdf[,1:4]    # data.frame
fdf[1:4]     # ffdf
fdf[]        # ffdf
fdf[[2]]     # ff
fdf$gender   # ff

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

23
Advanced example: physical structure of data.frames
# R example
# remember that 'rating' was a matrix
# but data.frame has copied to columns (unless we use I(rating))
str(df)
# ff example
physical(fdf) # ffdf has *not* copied anything
# let's physically copy
fdf2 <- ffdf(id=fid, gender=fgender, frating, ff_split=3)
physical(fdf2)
filename(fid)
filename(fdf$id)
filename(fdf2$id)
nrow(fdf2) <- 1e6
fdf2[1e6,] <- fdf[1,]
fdf2
nrow(fdf2) <- 12
# understand this error: pros and cons of embedded ff_matrix
nrow(fdf) <- 16
# understand what this does to the original fid and fgender
fdf3 <- fdf[1:2]
nrow(fdf3) <- 16
fgender
nrow(fdf3) <- 12
Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

24
Basic example: reading and writing csv
# R example
write.csv(df, file="df.csv")
cat(readLines("df.csv"), sep="\n")
df2 <- read.csv(file="df.csv")
df2
# ff example
write.csv.ffdf            # we leverage R's original functions
args(write.table.ffdf)    # arguments of the workhorse function

write.csv.ffdf(fdf, file="fdf.csv")
cat(readLines("fdf.csv"), sep="\n")
fdf2 <- read.csv.ffdf(file="fdf.csv")
fdf2
identical(fdf[,], fdf2[,])

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

25
Advanced example: physical specification when reading a csv
# ff example
vmode(fdf2)
args(read.table.ffdf)
fdf2 <- read.csv.ffdf(file="fdf.csv"
, asffdf_args = list(vmode=list(quad="gender", nibble=3:12))
, first.rows = 1
, next.rows = 4
, VERBOSE=TRUE)
fdf2
fdf3 <- read.csv.ffdf(file="fdf.csv"
, asffdf_args = list( vmode=list(nibble=3:12)
, ff_join=list(3:12)
)
)
fdf3
# understand "unknown factor values mapped to first level"
read.csv.ffdf(file="fdf.csv"
, asffdf_args = list(vmode=list(quad="gender", nibble=3:12))
, appendLevels = FALSE
, first.rows = 2
, VERBOSE=TRUE)

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

26
Basic example: file locations and file survival
# R example: objects are not permanent
# rm(df) simply removes the object
# when closing R with q() everything is gone
# ff example: object data in files is POTENTIALLY permanent
# rm(fdf) just removes the R object,
# the next gc() triggers a finalizer which acts on the file
# when closing R with q()
# the attribute finonexit decides whether the finalizer is called
# finally fftempdir is unlinked
physical(fd)
dir(getOption("fftempdir"))
# changing file locations and finalizers
sapply(physical(fdf2), finalizer)
filename(fdf2)
# filename(ff) <- changes one ff
pattern(fdf2) <- "./cwdpat_"
filename(fdf2)
# ./ renamed to cwdpat_ in getwd()
sapply(physical(fdf2), finalizer) # AND set finalizers to "close"
pattern(fdf2) <- "temppat_"
filename(fdf2)
# renamed to temppat_ in fftempdir
sapply(physical(fdf2), finalizer) # AND set finalizers to "delete"
Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

27
Closing example: managing ff archives
# R example
# save.image() saves all R objects (but not ff files)
# load() restores all R objects (if ff files exist, ff objects work)
# ff example
# get rid of the large objects (takes too long for demo)
delete(frf); rm(frf)
delete(fd); rm(fd)
delete(fm); rm(fm)
rm(b, b1, b2, w, x, hp, hu)
ffsave.image(file="myff") # => myff.RData + myff.ffData
# close R
# open R
library(ff)
str(ffinfo(file="myff"))
ffload(file="myff", list="fdf2")
sapply(physical(fdf2), finalizer)
yaff <- ff(1:12)
ffsave(yaff, file="myff", add=TRUE)
ffdrop(file="myff")
Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

28
SOME DETAILS
NOT PRESENTED
IN THE SESSION

29
Before you start, make sure you read the important warnings in
library(ff)
?LimWarn

The default behavior of ff can be configured in options()
getOption("fftempdir")     == "D:/.../Temp/RtmpidNQq9"
getOption("ffextension")   == "ff"
getOption("fffinonexit")   == TRUE
getOption("ffdrop")        == TRUE          # or always drop=FALSE
getOption("ffpagesize")    == 65536         # getdefaultpagesize()
getOption("ffcaching")     == "mmnoflush"   # or "mmeachflush"
getOption("ffbatchbytes")  == 16104816      # default 1% of RAM on Win32
                                            # set this under Linux !!

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

30
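A minimal sketch of adjusting these defaults before creating ff objects; the path and sizes below are illustrative, not the package defaults:

library(ff)
options(fftempdir = "/tmp/ffwork")      # where temporary ff files are created
options(ffbatchbytes = 64 * 1024^2)     # allow 64 MB per processing chunk
options(ffcaching = "mmeachflush")      # flush memory-mapped pages after each write
getOption("ffbatchbytes")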
Behavior on rm() and on q()
If we create or open an ff file, C++ resources are allocated, the file is opened, and a
finalizer is attached to the external pointer; the finalizer is executed at certain events to
release these resources.

Available finalizers
close          releases C++ resources and closes the file (default for named files)
delete         releases C++ resources and deletes the file (default for temp files)
deleteIfOpen   releases C++ resources and deletes the file only if the file was open

The finalizer is executed
rm(); gc()     at the next garbage collection after removal of the R-side object
q()            at the end of an R session (only if finonexit=TRUE)

Wrap-up of the temporary directory
.onUnload      getOption("fftempdir") is unlinked and all ff files therein are deleted

You need to understand these mechanisms, otherwise you might suffer ...
... unexpected loss of ff files
... GBs of garbage somewhere in temporary directories

Check and set the defaults to your needs ...
getOption("fffinonexit")
...
Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

31
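A minimal sketch of controlling file survival with finalizers; the filename is illustrative:

library(ff)
x <- ff(1:10, filename="mydata.ff")   # named file: finalizer defaults to "close"
finalizer(x)
finalizer(x) <- "delete"              # treat the file as disposable after all
rm(x); gc()                           # the next gc runs the finalizer and deletes the file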
Update, cloning and coercion
# plug in temporary result into original ff object
update(origff, from=tmpff, delete=TRUE)
# deep copy with no shared attributes
y <- clone(x)
# three variants to cache complete ff object
# into R-side RAM and write back to disk later
# variant deleting ff
ramobj <- as.ram(ffobj); delete(ffobj)
# some operations purely in RAM
ffobj <- as.ff(ramobj)
# variant retaining ff
ramobj <- as.ram(ffobj); close(ffobj)
# some operations purely in RAM
ffobj <- as.ff(ramobj, overwrite=TRUE)
# variant using update
ramobj <- as.ram(ffobj)
update(ffobj, from=ramobj)
Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

32
Atomic access functions, methods and generics

                                 reading     writing              combined reading and writing
single element                   get.ff      set.ff               getset.ff
contiguous vector                read.ff     write.ff             readwrite.ff
indexed access with HIP and vw   [           [<- (, add=FALSE)    swap(, add=FALSE)
for ram compatibility                        add(x, i, value)     swap.default

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

33
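A minimal sketch contrasting the low-level functions in the table with the "[" methods, which add hybrid index preprocessing:

library(ff)
x <- ff(1:10)
get.ff(x, 3L)    # low-level read, no index preprocessing
x[3]             # the same through "[", with HIP
x[3] <- 99L      # indexed write through "[<-"
x[3]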
HIP optimized disk access

Hybrid Index Preprocessing (HIP)
ffobj[1:1000000000] will silently submit the index information to
as.hi(quote(1:1000000000)), which does the HIP:
- parses rather than expands index expressions like 1:1000000000
- stores index information either plain or as rle-packed index
  increments (therefore 'hybrid')
- sorts the index and stores information to restore the original order
Benefits
- minimized RAM requirements for index information
- all elements of the ff file are accessed in strictly increasing position
Costs
- RAM needed for the HI may double the RAM for a plain index (due to re-ordering)
- RAM needed during HIP may be higher than for the final index (due to sorting)
Currently preprocessing is almost purely R code
(only critical parts in fast C-code: intisasc, intisdesc, intrle)

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

34
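A minimal sketch of HIP keeping the index small; exact sizes and timings are machine dependent:

library(ff)
x <- ff(vmode="double", length=1e7)
h <- as.hi(quote(1:1e7))   # parsed, never expanded to 10 million integers
object.size(h)             # small ...
object.size(1:1e7)         # ... compared to the expanded index (~40 MB)
system.time(s <- x[h])     # positions are visited in strictly increasing order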
Random access to permanent vector of integer: cost of HIP in [.ff and [<-.ff
is well-invested compared to naive access in get.ff and set.ff
[Benchmark figure, y-axis in seconds: panels for reading and writing 100K random positions in 512K and 512M integer ff files, with and without caching="mmeachflush", comparing loops over set.ff/get.ff against single [<-.ff / [.ff calls. The HIP-based "[" access is much faster in every panel, e.g. 20.374 vs. 44.021 seconds for write 100K/512M with mmeachflush.]
Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

35
Parsing of index expressions
# The parser knows 'c()' and ':', nothing else
# [.ff calls as.hi like as.hi(quote(index.expression))

# efficient index expressions
a <- 1
b <- 100
as.hi(quote(c(a:b, 100:1000)))   # parsed (packed)
as.hi(quote(c(1000:100, 100:1))) # parsed and reversed (packed)

# neither ascending nor descending sequences
as.hi(quote(c(2:10, 1)))  # parsed, but then expanded and sorted
                          # plus RAM for re-ordering

# parsing aborted when finding expressions with length>16
x <- 1:100; as.hi(quote(x))   # x evaluated, then rle-packed
as.hi(quote((1:100)))         # () stopped here, ok in a[(1:100)]

# parsing skipped
as.hi(1:100)                  # index expanded, then rle-packed

# parsing and packing skipped
as.hi(1:100, pack=FALSE)         # index expanded
as.hi(quote(1:100), pack=FALSE)  # index expanded
Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

36
RAM considerations
# ff is currently limited to length(ff) == .Machine$integer.max
# storing 370 MB of integer data
> a <- ff(0L, dim=c(1000000,100))
# obviously 370 MB for the return value
b <- a[]
# zero RAM for index or recycling
a[] <- 1
# thanks to recycling in C
a[] <- 0:1
a[1:100000000] <- 0:1   # thanks to HIP
a[100000000:1] <- 1:0
# Attention: 370 MB for the recycled value
a[, bydim=c(2,1)] <- 0:1
# don't do this
a[offset+(1:100000000)] <- 1   # better: a[(o+1):(o+n)] <- 1
# 5x 370MB during HIP
a[sample(100000000)] <- 1      # without chunking, final costs are
                               # 370 MB index + 370 MB re-order
a[sample(100000000)] <- 0:1    # ditto + 370 MB recycling

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

37
Lessons from RAM investigation

rle() requires up to 9x its input RAM*;
without using structure() this reduces to 7x RAM.
intrle() uses an optimized C version, needs
up to 2x RAM and is faster by a factor of 50.
Trick: intrle() returns NULL if the compression
achieved is worse than 33.3%. Thus the maximum
RAM needed is:
- 1/1 for the input vector
- 1/3 for collecting values
- 1/3 for collecting lengths
- 1/3 buffer for copying to the return value

* Measured in version 2.6.2
Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts

38
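A minimal sketch, assuming intrle() is accessible as listed among the C-accelerated functions earlier in this deck:

library(ff)                       # loads package bit as well
x <- rep(1:5, each=1000L)         # highly compressible integer vector
r <- rle(x)                       # base R run-length encoding
ir <- intrle(x)                   # optimized C version; returns NULL when
                                  # compression would be worse than 1/3
object.size(r)
object.size(ir)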
A physical to virtual hierarchy

R.ff: cubelet processing

virtual window       vw(ff) <-                  the offset, window and rest components of
                                                vw must match current length and dim
virtual dimension    dim(ff) <- c(3,4)          dim must match current length
                     dimorder(ff) <- c(2,1)
virtual / physical   length(ff) <- 12           physical length change
physical             ff <- ff(length=16)        physical file size
                     maxlength(ff)==16          # readonly

Source: Oehlschlägel (2010) Managing large datasets in R – ff examples and concepts
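A minimal sketch of a virtual window, assuming the c(offset, window, rest) form described above; the three components must sum to the current length:

library(ff)
x <- ff(1:12)
vw(x) <- c(2, 8, 2)   # skip 2 elements, expose a window of 8, leave 2 at the end
length(x)             # 8: the virtual length is the window length
x[1]                  # 3: the first element inside the window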

39

Más contenido relacionado

La actualidad más candente

Python and R for quantitative finance
Python and R for quantitative financePython and R for quantitative finance
Python and R for quantitative financeLuca Sbardella
 
Data Structures for Statistical Computing in Python
Data Structures for Statistical Computing in PythonData Structures for Statistical Computing in Python
Data Structures for Statistical Computing in PythonWes McKinney
 
R programming groundup-basic-section-i
R programming groundup-basic-section-iR programming groundup-basic-section-i
R programming groundup-basic-section-iDr. Awase Khirni Syed
 
Data Storage Formats in Hadoop
Data Storage Formats in HadoopData Storage Formats in Hadoop
Data Storage Formats in HadoopBotond Balázs
 
R programming slides
R  programming slidesR  programming slides
R programming slidesPankaj Saini
 
Introduction to the R Statistical Computing Environment
Introduction to the R Statistical Computing EnvironmentIntroduction to the R Statistical Computing Environment
Introduction to the R Statistical Computing Environmentizahn
 
Scalable Data Analysis in R -- Lee Edlefsen
Scalable Data Analysis in R -- Lee EdlefsenScalable Data Analysis in R -- Lee Edlefsen
Scalable Data Analysis in R -- Lee EdlefsenRevolution Analytics
 
Data engineering and analytics using python
Data engineering and analytics using pythonData engineering and analytics using python
Data engineering and analytics using pythonPurna Chander
 
Parquet - Data I/O - Philadelphia 2013
Parquet - Data I/O - Philadelphia 2013Parquet - Data I/O - Philadelphia 2013
Parquet - Data I/O - Philadelphia 2013larsgeorge
 
Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...
Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...
Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...Alex Levenson
 
pandas: a Foundational Python Library for Data Analysis and Statistics
pandas: a Foundational Python Library for Data Analysis and Statisticspandas: a Foundational Python Library for Data Analysis and Statistics
pandas: a Foundational Python Library for Data Analysis and StatisticsWes McKinney
 
The History and Use of R
The History and Use of RThe History and Use of R
The History and Use of RAnalyticsWeek
 
R programming for data science
R programming for data scienceR programming for data science
R programming for data scienceSovello Hildebrand
 
How to use Parquet as a basis for ETL and analytics
How to use Parquet as a basis for ETL and analyticsHow to use Parquet as a basis for ETL and analytics
How to use Parquet as a basis for ETL and analyticsJulien Le Dem
 
R language tutorial
R language tutorialR language tutorial
R language tutorialDavid Chiu
 
Parquet Hadoop Summit 2013
Parquet Hadoop Summit 2013Parquet Hadoop Summit 2013
Parquet Hadoop Summit 2013Julien Le Dem
 
Why R? A Brief Introduction to the Open Source Statistics Platform
Why R? A Brief Introduction to the Open Source Statistics PlatformWhy R? A Brief Introduction to the Open Source Statistics Platform
Why R? A Brief Introduction to the Open Source Statistics PlatformSyracuse University
 

La actualidad más candente (20)

Python and R for quantitative finance
Python and R for quantitative financePython and R for quantitative finance
Python and R for quantitative finance
 
Data Structures for Statistical Computing in Python
Data Structures for Statistical Computing in PythonData Structures for Statistical Computing in Python
Data Structures for Statistical Computing in Python
 
R programming groundup-basic-section-i
R programming groundup-basic-section-iR programming groundup-basic-section-i
R programming groundup-basic-section-i
 
Data Storage Formats in Hadoop
Data Storage Formats in HadoopData Storage Formats in Hadoop
Data Storage Formats in Hadoop
 
R programming slides
R  programming slidesR  programming slides
R programming slides
 
An Intoduction to R
An Intoduction to RAn Intoduction to R
An Intoduction to R
 
Introduction to the R Statistical Computing Environment
Introduction to the R Statistical Computing EnvironmentIntroduction to the R Statistical Computing Environment
Introduction to the R Statistical Computing Environment
 
Scalable Data Analysis in R -- Lee Edlefsen
Scalable Data Analysis in R -- Lee EdlefsenScalable Data Analysis in R -- Lee Edlefsen
Scalable Data Analysis in R -- Lee Edlefsen
 
Data engineering and analytics using python
Data engineering and analytics using pythonData engineering and analytics using python
Data engineering and analytics using python
 
R tutorial
R tutorialR tutorial
R tutorial
 
Neo4j vs giraph
Neo4j vs giraphNeo4j vs giraph
Neo4j vs giraph
 
Parquet - Data I/O - Philadelphia 2013
Parquet - Data I/O - Philadelphia 2013Parquet - Data I/O - Philadelphia 2013
Parquet - Data I/O - Philadelphia 2013
 
Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...
Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...
Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...
 
pandas: a Foundational Python Library for Data Analysis and Statistics
pandas: a Foundational Python Library for Data Analysis and Statisticspandas: a Foundational Python Library for Data Analysis and Statistics
pandas: a Foundational Python Library for Data Analysis and Statistics
 
The History and Use of R
The History and Use of RThe History and Use of R
The History and Use of R
 
R programming for data science
R programming for data scienceR programming for data science
R programming for data science
 
How to use Parquet as a basis for ETL and analytics
How to use Parquet as a basis for ETL and analyticsHow to use Parquet as a basis for ETL and analytics
How to use Parquet as a basis for ETL and analytics
 
R language tutorial
R language tutorialR language tutorial
R language tutorial
 
Parquet Hadoop Summit 2013
Parquet Hadoop Summit 2013Parquet Hadoop Summit 2013
Parquet Hadoop Summit 2013
 
Why R? A Brief Introduction to the Open Source Statistics Platform
Why R? A Brief Introduction to the Open Source Statistics PlatformWhy R? A Brief Introduction to the Open Source Statistics Platform
Why R? A Brief Introduction to the Open Source Statistics Platform
 

Destacado

Taking R to the Limit (High Performance Computing in R), Part 2 -- Large Data...
Taking R to the Limit (High Performance Computing in R), Part 2 -- Large Data...Taking R to the Limit (High Performance Computing in R), Part 2 -- Large Data...
Taking R to the Limit (High Performance Computing in R), Part 2 -- Large Data...Ryan Rosario
 
Taking R to the Limit (High Performance Computing in R), Part 1 -- Paralleliz...
Taking R to the Limit (High Performance Computing in R), Part 1 -- Paralleliz...Taking R to the Limit (High Performance Computing in R), Part 1 -- Paralleliz...
Taking R to the Limit (High Performance Computing in R), Part 1 -- Paralleliz...Ryan Rosario
 
Massive scale analytics with Stratosphere using R
Massive scale analytics with Stratosphere using RMassive scale analytics with Stratosphere using R
Massive scale analytics with Stratosphere using RJose Luis Lopez Pino
 
Parallel Computing with R
Parallel Computing with RParallel Computing with R
Parallel Computing with RAbhirup Mallik
 
NumPy and SciPy for Data Mining and Data Analysis Including iPython, SciKits,...
NumPy and SciPy for Data Mining and Data Analysis Including iPython, SciKits,...NumPy and SciPy for Data Mining and Data Analysis Including iPython, SciKits,...
NumPy and SciPy for Data Mining and Data Analysis Including iPython, SciKits,...Ryan Rosario
 
Building a scalable data science platform with R
Building a scalable data science platform with RBuilding a scalable data science platform with R
Building a scalable data science platform with RRevolution Analytics
 
Heatmaps best practices Strata Hadoop
Heatmaps best practices Strata HadoopHeatmaps best practices Strata Hadoop
Heatmaps best practices Strata HadoopEdwin de Jonge
 
Accessing R from Python using RPy2
Accessing R from Python using RPy2Accessing R from Python using RPy2
Accessing R from Python using RPy2Ryan Rosario
 
January 2016 Meetup: Speeding up (big) data manipulation with data.table package
January 2016 Meetup: Speeding up (big) data manipulation with data.table packageJanuary 2016 Meetup: Speeding up (big) data manipulation with data.table package
January 2016 Meetup: Speeding up (big) data manipulation with data.table packageZurich_R_User_Group
 
Learning R and Teaching R
Learning R and Teaching RLearning R and Teaching R
Learning R and Teaching RAjay Ohri
 
Chunked, dplyr for large text files
Chunked, dplyr for large text filesChunked, dplyr for large text files
Chunked, dplyr for large text filesEdwin de Jonge
 
ESWC SS 2013 - Wednesday Tutorial Marko Grobelnik: Introduction to Big Data A...
ESWC SS 2013 - Wednesday Tutorial Marko Grobelnik: Introduction to Big Data A...ESWC SS 2013 - Wednesday Tutorial Marko Grobelnik: Introduction to Big Data A...
ESWC SS 2013 - Wednesday Tutorial Marko Grobelnik: Introduction to Big Data A...eswcsummerschool
 
Statistics for data scientists
Statistics for  data scientistsStatistics for  data scientists
Statistics for data scientistsAjay Ohri
 
Learn Power BI with Power Pivot, Power Query, Power View, Power Map and Q&A
Learn Power BI with Power Pivot, Power Query, Power View, Power Map and Q&ALearn Power BI with Power Pivot, Power Query, Power View, Power Map and Q&A
Learn Power BI with Power Pivot, Power Query, Power View, Power Map and Q&AVishal Pawar
 
Visual Design with Data
Visual Design with DataVisual Design with Data
Visual Design with DataSeth Familian
 

Destacado (17)

Taking R to the Limit (High Performance Computing in R), Part 2 -- Large Data...
Taking R to the Limit (High Performance Computing in R), Part 2 -- Large Data...Taking R to the Limit (High Performance Computing in R), Part 2 -- Large Data...
Taking R to the Limit (High Performance Computing in R), Part 2 -- Large Data...
 
Taking R to the Limit (High Performance Computing in R), Part 1 -- Paralleliz...
Taking R to the Limit (High Performance Computing in R), Part 1 -- Paralleliz...Taking R to the Limit (High Performance Computing in R), Part 1 -- Paralleliz...
Taking R to the Limit (High Performance Computing in R), Part 1 -- Paralleliz...
 
Massive scale analytics with Stratosphere using R
Massive scale analytics with Stratosphere using RMassive scale analytics with Stratosphere using R
Massive scale analytics with Stratosphere using R
 
Parallel Computing with R
Parallel Computing with RParallel Computing with R
Parallel Computing with R
 
RHadoop, R meets Hadoop
RHadoop, R meets HadoopRHadoop, R meets Hadoop
RHadoop, R meets Hadoop
 
NumPy and SciPy for Data Mining and Data Analysis Including iPython, SciKits,...
NumPy and SciPy for Data Mining and Data Analysis Including iPython, SciKits,...NumPy and SciPy for Data Mining and Data Analysis Including iPython, SciKits,...
NumPy and SciPy for Data Mining and Data Analysis Including iPython, SciKits,...
 
Building a scalable data science platform with R
Building a scalable data science platform with RBuilding a scalable data science platform with R
Building a scalable data science platform with R
 
Heatmaps best practices Strata Hadoop
Heatmaps best practices Strata HadoopHeatmaps best practices Strata Hadoop
Heatmaps best practices Strata Hadoop
 
Accessing R from Python using RPy2
Accessing R from Python using RPy2Accessing R from Python using RPy2
Accessing R from Python using RPy2
 
January 2016 Meetup: Speeding up (big) data manipulation with data.table package
January 2016 Meetup: Speeding up (big) data manipulation with data.table packageJanuary 2016 Meetup: Speeding up (big) data manipulation with data.table package
January 2016 Meetup: Speeding up (big) data manipulation with data.table package
 
Learning R and Teaching R
Learning R and Teaching RLearning R and Teaching R
Learning R and Teaching R
 
The R Ecosystem
The R EcosystemThe R Ecosystem
The R Ecosystem
 
Chunked, dplyr for large text files
Chunked, dplyr for large text filesChunked, dplyr for large text files
Chunked, dplyr for large text files
 
ESWC SS 2013 - Wednesday Tutorial Marko Grobelnik: Introduction to Big Data A...
ESWC SS 2013 - Wednesday Tutorial Marko Grobelnik: Introduction to Big Data A...ESWC SS 2013 - Wednesday Tutorial Marko Grobelnik: Introduction to Big Data A...
ESWC SS 2013 - Wednesday Tutorial Marko Grobelnik: Introduction to Big Data A...
 
Statistics for data scientists
Statistics for  data scientistsStatistics for  data scientists
Statistics for data scientists
 
Learn Power BI with Power Pivot, Power Query, Power View, Power Map and Q&A
Learn Power BI with Power Pivot, Power Query, Power View, Power Map and Q&ALearn Power BI with Power Pivot, Power Query, Power View, Power Map and Q&A
Learn Power BI with Power Pivot, Power Query, Power View, Power Map and Q&A
 
Visual Design with Data
Visual Design with DataVisual Design with Data
Visual Design with Data
 

Similar a Managing large datasets in R – ff examples and concepts

A Hudi Live Event: Shaping a Database Experience within the Data Lake with Ap...
A Hudi Live Event: Shaping a Database Experience within the Data Lake with Ap...A Hudi Live Event: Shaping a Database Experience within the Data Lake with Ap...
A Hudi Live Event: Shaping a Database Experience within the Data Lake with Ap...nadine39280
 
Streaming Data in R
Streaming Data in RStreaming Data in R
Streaming Data in RRory Winston
 
Big Data Essentials meetup @ IBM Ljubljana 23.06.2015
Big Data Essentials meetup @ IBM Ljubljana 23.06.2015Big Data Essentials meetup @ IBM Ljubljana 23.06.2015
Big Data Essentials meetup @ IBM Ljubljana 23.06.2015Andrey Vykhodtsev
 
Text Mining Infrastructure in R
Text Mining Infrastructure in RText Mining Infrastructure in R
Text Mining Infrastructure in RAshraf Uddin
 
Standardizing on a single N-dimensional array API for Python
Standardizing on a single N-dimensional array API for PythonStandardizing on a single N-dimensional array API for Python
Standardizing on a single N-dimensional array API for PythonRalf Gommers
 
Big data distributed processing: Spark introduction
Big data distributed processing: Spark introductionBig data distributed processing: Spark introduction
Big data distributed processing: Spark introductionHektor Jacynycz García
 
Cloudera Impala Overview (via Scott Leberknight)
Cloudera Impala Overview (via Scott Leberknight)Cloudera Impala Overview (via Scott Leberknight)
Cloudera Impala Overview (via Scott Leberknight)Cloudera, Inc.
 
Cloudera Impala, updated for v1.0
Cloudera Impala, updated for v1.0Cloudera Impala, updated for v1.0
Cloudera Impala, updated for v1.0Scott Leberknight
 
Apache Hive for modern DBAs
Apache Hive for modern DBAsApache Hive for modern DBAs
Apache Hive for modern DBAsLuis Marques
 
Analytics Beyond RAM Capacity using R
Analytics Beyond RAM Capacity using RAnalytics Beyond RAM Capacity using R
Analytics Beyond RAM Capacity using RAlex Palamides
 
Lecture1_R.pdf
Lecture1_R.pdfLecture1_R.pdf
Lecture1_R.pdfBusyBird2
 
Modeling in R Programming Language for Beginers.ppt
Modeling in R Programming Language for Beginers.pptModeling in R Programming Language for Beginers.ppt
Modeling in R Programming Language for Beginers.pptanshikagoel52
 
Language-agnostic data analysis workflows and reproducible research
Language-agnostic data analysis workflows and reproducible researchLanguage-agnostic data analysis workflows and reproducible research
Language-agnostic data analysis workflows and reproducible researchAndrew Lowe
 
Apache Flink Overview at SF Spark and Friends
Apache Flink Overview at SF Spark and FriendsApache Flink Overview at SF Spark and Friends
Apache Flink Overview at SF Spark and FriendsStephan Ewen
 
Intro to hadoop ecosystem
Intro to hadoop ecosystemIntro to hadoop ecosystem
Intro to hadoop ecosystemGrzegorz Kolpuc
 
A Survey of Concurrency Constructs
A Survey of Concurrency ConstructsA Survey of Concurrency Constructs
A Survey of Concurrency ConstructsTed Leung
 

Similar a Managing large datasets in R – ff examples and concepts (20)

A Hudi Live Event: Shaping a Database Experience within the Data Lake with Ap...
A Hudi Live Event: Shaping a Database Experience within the Data Lake with Ap...A Hudi Live Event: Shaping a Database Experience within the Data Lake with Ap...
A Hudi Live Event: Shaping a Database Experience within the Data Lake with Ap...
 
Streaming Data in R
Streaming Data in RStreaming Data in R
Streaming Data in R
 
Big Data Essentials meetup @ IBM Ljubljana 23.06.2015
Big Data Essentials meetup @ IBM Ljubljana 23.06.2015Big Data Essentials meetup @ IBM Ljubljana 23.06.2015
Big Data Essentials meetup @ IBM Ljubljana 23.06.2015
 
Text Mining Infrastructure in R
Text Mining Infrastructure in RText Mining Infrastructure in R
Text Mining Infrastructure in R
 
Standardizing on a single N-dimensional array API for Python
Standardizing on a single N-dimensional array API for PythonStandardizing on a single N-dimensional array API for Python
Standardizing on a single N-dimensional array API for Python
 
Big data distributed processing: Spark introduction
Big data distributed processing: Spark introductionBig data distributed processing: Spark introduction
Big data distributed processing: Spark introduction
 
Cloudera Impala Overview (via Scott Leberknight)
Cloudera Impala Overview (via Scott Leberknight)Cloudera Impala Overview (via Scott Leberknight)
Cloudera Impala Overview (via Scott Leberknight)
 
Cloudera Impala, updated for v1.0
Cloudera Impala, updated for v1.0Cloudera Impala, updated for v1.0
Cloudera Impala, updated for v1.0
 
Apache Hive for modern DBAs
Apache Hive for modern DBAsApache Hive for modern DBAs
Apache Hive for modern DBAs
 
Analytics Beyond RAM Capacity using R
Analytics Beyond RAM Capacity using RAnalytics Beyond RAM Capacity using R
Analytics Beyond RAM Capacity using R
 
A gentle introduction to Oracle R Enterprise
A gentle introduction to Oracle R EnterpriseA gentle introduction to Oracle R Enterprise
A gentle introduction to Oracle R Enterprise
 
Lecture1_R.pdf
Lecture1_R.pdfLecture1_R.pdf
Lecture1_R.pdf
 
Lecture1_R.ppt
Lecture1_R.pptLecture1_R.ppt
Lecture1_R.ppt
 
Lecture1_R.ppt
Lecture1_R.pptLecture1_R.ppt
Lecture1_R.ppt
 
Lecture1 r
Lecture1 rLecture1 r
Lecture1 r
 
Modeling in R Programming Language for Beginers.ppt
Modeling in R Programming Language for Beginers.pptModeling in R Programming Language for Beginers.ppt
Modeling in R Programming Language for Beginers.ppt
 
Language-agnostic data analysis workflows and reproducible research
Language-agnostic data analysis workflows and reproducible researchLanguage-agnostic data analysis workflows and reproducible research
Language-agnostic data analysis workflows and reproducible research
 
Apache Flink Overview at SF Spark and Friends
Apache Flink Overview at SF Spark and FriendsApache Flink Overview at SF Spark and Friends
Apache Flink Overview at SF Spark and Friends
 
Intro to hadoop ecosystem
Intro to hadoop ecosystemIntro to hadoop ecosystem
Intro to hadoop ecosystem
 
A Survey of Concurrency Constructs
A Survey of Concurrency ConstructsA Survey of Concurrency Constructs
A Survey of Concurrency Constructs
 

Más de Ajay Ohri

Introduction to R ajay Ohri
Introduction to R ajay OhriIntroduction to R ajay Ohri
Introduction to R ajay OhriAjay Ohri
 
Introduction to R
Introduction to RIntroduction to R
Introduction to RAjay Ohri
 
Social Media and Fake News in the 2016 Election
Social Media and Fake News in the 2016 ElectionSocial Media and Fake News in the 2016 Election
Social Media and Fake News in the 2016 ElectionAjay Ohri
 
Download Python for R Users pdf for free
Download Python for R Users pdf for freeDownload Python for R Users pdf for free
Download Python for R Users pdf for freeAjay Ohri
 
Install spark on_windows10
Install spark on_windows10Install spark on_windows10
Install spark on_windows10Ajay Ohri
 
Ajay ohri Resume
Ajay ohri ResumeAjay ohri Resume
Ajay ohri ResumeAjay Ohri
 
National seminar on emergence of internet of things (io t) trends and challe...
National seminar on emergence of internet of things (io t)  trends and challe...National seminar on emergence of internet of things (io t)  trends and challe...
National seminar on emergence of internet of things (io t) trends and challe...Ajay Ohri
 
Tools and techniques for data science
Tools and techniques for data scienceTools and techniques for data science
Tools and techniques for data scienceAjay Ohri
 
How Big Data ,Cloud Computing ,Data Science can help business
How Big Data ,Cloud Computing ,Data Science can help businessHow Big Data ,Cloud Computing ,Data Science can help business
How Big Data ,Cloud Computing ,Data Science can help businessAjay Ohri
 
Training in Analytics and Data Science
Training in Analytics and Data ScienceTraining in Analytics and Data Science
Training in Analytics and Data ScienceAjay Ohri
 
Software Testing for Data Scientists
Software Testing for Data ScientistsSoftware Testing for Data Scientists
Software Testing for Data ScientistsAjay Ohri
 
A Data Science Tutorial in Python
A Data Science Tutorial in PythonA Data Science Tutorial in Python
A Data Science Tutorial in PythonAjay Ohri
 
How does cryptography work? by Jeroen Ooms
How does cryptography work?  by Jeroen OomsHow does cryptography work?  by Jeroen Ooms
How does cryptography work? by Jeroen OomsAjay Ohri
 
Using R for Social Media and Sports Analytics
Using R for Social Media and Sports AnalyticsUsing R for Social Media and Sports Analytics
Using R for Social Media and Sports AnalyticsAjay Ohri
 
Kush stats alpha
Kush stats alpha Kush stats alpha
Kush stats alpha Ajay Ohri
 
Analyze this
Analyze thisAnalyze this
Analyze thisAjay Ohri
 
Summer school python in spanish
Summer school python in spanishSummer school python in spanish
Summer school python in spanishAjay Ohri
 

Más de Ajay Ohri (20)

Introduction to R ajay Ohri
Introduction to R ajay OhriIntroduction to R ajay Ohri
Introduction to R ajay Ohri
 
Introduction to R
Introduction to RIntroduction to R
Introduction to R
 
Social Media and Fake News in the 2016 Election
Social Media and Fake News in the 2016 ElectionSocial Media and Fake News in the 2016 Election
Social Media and Fake News in the 2016 Election
 
Pyspark
PysparkPyspark
Pyspark
 
Download Python for R Users pdf for free
Download Python for R Users pdf for freeDownload Python for R Users pdf for free
Download Python for R Users pdf for free
 
Install spark on_windows10
Install spark on_windows10Install spark on_windows10
Install spark on_windows10
 
Ajay ohri Resume
Ajay ohri ResumeAjay ohri Resume
Ajay ohri Resume
 
National seminar on emergence of internet of things (io t) trends and challe...
National seminar on emergence of internet of things (io t)  trends and challe...National seminar on emergence of internet of things (io t)  trends and challe...
National seminar on emergence of internet of things (io t) trends and challe...
 
Tools and techniques for data science
Tools and techniques for data scienceTools and techniques for data science
Tools and techniques for data science
 
How Big Data ,Cloud Computing ,Data Science can help business
How Big Data ,Cloud Computing ,Data Science can help businessHow Big Data ,Cloud Computing ,Data Science can help business
How Big Data ,Cloud Computing ,Data Science can help business
 
Training in Analytics and Data Science
Training in Analytics and Data ScienceTraining in Analytics and Data Science
Training in Analytics and Data Science
 
Tradecraft
Tradecraft   Tradecraft
Tradecraft
 
Software Testing for Data Scientists
Software Testing for Data ScientistsSoftware Testing for Data Scientists
Software Testing for Data Scientists
 
Craps
CrapsCraps
Craps
 
A Data Science Tutorial in Python
A Data Science Tutorial in PythonA Data Science Tutorial in Python
A Data Science Tutorial in Python
 
How does cryptography work? by Jeroen Ooms
How does cryptography work?  by Jeroen OomsHow does cryptography work?  by Jeroen Ooms
How does cryptography work? by Jeroen Ooms
 
Using R for Social Media and Sports Analytics
Using R for Social Media and Sports AnalyticsUsing R for Social Media and Sports Analytics
Using R for Social Media and Sports Analytics
 
Kush stats alpha
Kush stats alpha Kush stats alpha
Kush stats alpha
 
Analyze this
Analyze thisAnalyze this
Analyze this
 
Summer school python in spanish
Summer school python in spanishSummer school python in spanish
Summer school python in spanish
 

Último

New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Managing large datasets in R – ff examples and concepts

  • 6. Statistical methods that can be scaled to large data problems using the infrastructure provided by packages bit and ff
     Basic infrastructure for large objects (packages bit and ff)
     Basic infrastructure for chunking (packages bit and ff)
     Reading and writing csv files (chunked sequential access with package ff)
     Data transformation (chunked and partially parallelized with package R.ff)
     Math and probability distributions (chunked and parallelized with package R.ff)
     Data filtering (chunked and parallelized, supported by bit and ff)
     Descriptive statistics on chunks (chunked and parallelized, supported by ff)
     Biglm and bigglm (chunked fitting with package biglm)
     Bootstrapping (chunked and parallelized random access)
     Bagged predictive modelling (chunked and parallelized random access)
     Bagged clustering (chunked and parallelized random access with truecluster)
     Likelihood maximization (chunked and parallelized sequential access)
     EM-algorithm (chunked and partially parallelized)
     Some linear algebra (chunked processing with package R.ff): matrix transpose, matrix multiplication, matrix inversion, svd for many rows and few columns
     New today: ffsave / ffload – saving and loading ff archives (incremental save and selective load with package ff)
  • 7. Infrastructure in bit and ff addresses a range of performance-critical topics
     Complexities in scope of ff and bit
       virtual objects by reference – avoid copying and RAM duplication
       disk based objects – manage temporary and permanent objects on disk
       memory efficient data types – minimize storage requirements of data
       memory efficient subscript types – minimize storage requirements of subscripts
       fast chunk access methods – allow fast (random) access to disk, for chunks of data
       fast index (pre)processing – minimize or avoid the cost of subscript preparation
       chunk processing infrastructure – foundation for efficient chunked processing
       large csv import/export – interface large datasets conveniently
       large data management – manage all files behind ff
     Complexities partially in scope
       parallel processing – parallel access to large datasets (without locking)
     Complexities in scope of R.ff
       basic processing of large objects – elementwise operations and more
       some linear algebra for large objects – t, vt, matmul, matinv, few-column svd
     Currently not in scope
       full linear algebra for large objects – observe package bigmemory
  • 8. Basic memory organisation
     [Diagram: R sits on top of the filesystem cache, which sits on top of the hard disk; noted options: choice of filesystem write cache, SSD?, RAID?]
  • 9. The architecture behind packages ff and bit implements several performance optimizations
     R frontend – Hybrid Index Preprocessing
       HIP
         parsing of index expressions instead of memory consuming evaluation
         ordering of access positions and re-ordering of returned values
         rapid rle packing of indices if and only if the rle representation uses less memory than raw storage
       Hybrid copying semantics
         virtual dim()/dimorder()
         virtual windows vw()
         virtual transpose vt()
       New performance generics: clone(), update(), swap(), add(), chunk(), bigsample()
       Efficient coercions
     C interface – fast access methods
       C-code accelerating is.unsorted() and rle() for integers: intisasc(), intisdesc(), intrle()
       C-code for looping over the hybrid index can handle mixed raw and rle-packed indices in arrays and avoids multiplication
       C-code for looping over bit: outer loop fixes a word in the processor cache, inner loop handles the bits
     C++ backend – memory mapped compressed pages
       tunable pagesize and system caching = c("mmnoflush", "mmeachflush")
       custom datatype bit-level en/decoding, 'add' arithmetics and NA handling
       ported to Windows, Mac OS, Linux and BSDs
       Large File Support (>2GB) on Linux
       paged shared memory allows parallel processing
       fast creation and modification of large files on sparse filesystems
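     A minimal sketch of the rle-packing idea above, using base R's rle() purely for illustration (ff/bit use the C-level intrle() mentioned here); sizes in the comments are approximate:
       i_sorted <- sort(sample(1000000L))         # a contiguous index, fully materialized
       i_random <- sample(1000000L)               # a random permutation of the same positions
       object.size(i_sorted)                      # ~4 MB of raw integer index
       object.size(rle(diff(i_sorted)))           # tiny: a single run of increments
       object.size(rle(diff(i_random)))           # larger than the raw index: packing not worthwhile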
  • 10. Basic example: creating atomic vectors
     # R example
     ri <- integer(10)
     # ff examples
     library(ff)
     fi <- ff(vmode="integer", length=10)
     fb <- ff(vmode="byte", length=10)
     rb <- byte(10)   # in R this is integer
     fb <- ff(rb)
     vmode(ri)
     vmode(fi)
     vmode(rb)
     vmode(fb)
     cbind(.rambytes, .ffbytes)[c("integer","byte"),]
     ?vmode
  • 11. Atomic data types supported by ff (legend: implemented / not implemented)
     vmode(x)   type                     NA handling
     boolean    1 bit logical            no NA
     logical    2 bit logical            with NA
     quad       2 bit unsigned integer   no NA
     nibble     4 bit unsigned integer   no NA
     byte       8 bit signed integer     with NA
     ubyte      8 bit unsigned integer   no NA
     short      16 bit signed integer    with NA
     ushort     16 bit unsigned integer  no NA
     integer    32 bit signed integer    with NA
     single     32 bit float             no NA
     double     64 bit float             with NA
     complex    2x64 bit float           with NA
     raw        8 bit unsigned char      no NA
     character  fixed widths, tbd.       with NA
     Compounds: factor, ordered, custom compounds (Date, POSIXct); POSIXlt (not atomic)
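     A minimal sketch of the NA handling in the table, contrasting the signed byte (with NA) and unsigned ubyte (no NA) vmodes; what a no-NA vmode actually stores in place of NA may depend on the ff version, so inspect the result:
       library(ff)
       fs <- ff(vmode = "byte",  length = 3)   # 8 bit signed, with NA
       fu <- ff(vmode = "ubyte", length = 3)   # 8 bit unsigned, no NA
       fs[1] <- NA
       fs[1]                                   # NA round-trips
       fu[1] <- NA
       fu[1]                                   # no NA slot in this vmode; inspect what is stored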
  • 12. Advanced example: creating atomic vectors
     # R example
     rf <- factor(levels=c("A","T","G","C"))
     length(rf) <- 10
     rf
     # ff examples
     frf <- ff(rf)
     length(frf) <- 1e8
     frf
     frf[11:1e8] <- NA
     ff(vmode="quad", length=1e8, levels=c("A","T","G","C"))
     ff(vmode="quad", length=10, levels=c("A","B","C","D"), ordered=TRUE)
     ff(Sys.Date()+0:9, length=10)
     ff(Sys.time()+0:9, length=10)
     ff(0:9, ramclass="Date")
     ff(0:9, ramclass=c("POSIXt", "POSIXct"))
     str(ff(as.POSIXct(as.POSIXlt(Sys.time(), "GMT")), length=12))
  • 13. Limiting R's RAM consumption through chunked processing
     # ff example
     > str(chunk(fd))
     List of 50
      $ :Class 'ri'  int [1:3] 1 2000000 100000000
      $ :Class 'ri'  int [1:3] 2000001 4000000 100000000
      $ :Class 'ri'  int [1:3] 4000001 6000000 100000000
      [snipped]
     # simple as seq but balancing chunk size
     > args(chunk.default)
     function (from = NULL, to = NULL, by = NULL, length.out = NULL,
       along.with = NULL, overlap = 0L, method = c("bbatch", "seq"), ...)
     # automatic calculation of chunk size
     > args(chunk.ff_vector)
     function (x, RECORDBYTES = .rambytes[vmode(x)],
       BATCHBYTES = getOption("ffbatchbytes"), ...)
     # by default limited at 1% of available RAM
     > getOption("ffbatchbytes") / 1024^2 / memory.limit()
     [1] 0.01
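     A minimal sketch of the chunked pattern that chunk() enables, assuming fd is the large ff double vector used on these slides (created on the next slide); only one chunk is held in RAM at a time:
       total <- 0
       for (i in chunk(fd)) total <- total + sum(fd[i])
       total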
  • 14. Basic example: working with atomic vectors
     # R example
     rd <- double(100)
     rd[] <- runif(100)   # write
     rd[]                 # this is the proper non-lazy way to read
     # ff example
     fd <- ff(vmode="double", length=1e8)
     system.time(
       for (i in chunk(fd)) fd[i] <- runif(sum(i))
     )
     system.time(
       s <- lapply(chunk(fd), function(i) quantile(fd[i], c(0.05, 0.95)))
     )
     crbind(s)
  • 15. Negligible RAM duplication for parallel execution on ff objects
     [Diagram: several R processes access the same ff files in parallel through the shared filesystem cache and hard disk; noted options: choice of filesystem write cache, SSD?, RAID?]
  • 16. Advanced example: working parallel with atomic vectors
     library(snowfall)
     finalizer(fd)            # let slaves not delete fd on shutdown
     finalizer(fd) <- "close"
     sfInit(parallel=TRUE, cpus=2, type="SOCK")
     sfLibrary(ff)
     sfExport("fd")           # do not export the same ff multiple times
     sfClusterEval(open(fd))  # explicitly opening avoids a gc problem
     system.time(
       sfLapply(chunk(fd), function(i){
         fd[i] <- runif(sum(i))
         invisible()
       })
     )
     system.time(
       s <- sfLapply(chunk(fd), function(i) quantile(fd[i], c(0.05, 0.95)))
     )
     sfClusterEval(close(fd)) # for completeness
     csummary(s)
     sfStop()
  • 17. Supported index expressions (legend: implemented / not implemented)
     x <- ff(1:12, dim=c(3,4), dimnames=list(letters[1:3], NULL))
     i <- as.bit(c(TRUE, FALSE, FALSE))
     expression         example
     positive integers  x[1, 1]
     negative integers  x[-(2:12)]
     logical            x[c(TRUE, FALSE, FALSE), 1]
     character          x["a", 1]
     integer matrices   x[rbind(c(1,1))]
     bit                x[i & i, 1]
     bitwhich           x[as.bitwhich(i), 1]
     range index        x[ri(1,1), 1]
     hybrid index       x[as.hi(1), 1]   (ff's internal index)
     zeros              x[0]
     NAs                x[NA]
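     A minimal sketch exercising a few of the supported index types on the example objects above; as.bit(), as.bitwhich() and ri() come from bit/ff, and which package exports which may depend on the installed version:
       library(ff)
       x <- ff(1:12, dim = c(3, 4), dimnames = list(letters[1:3], NULL))
       i <- as.bit(c(TRUE, FALSE, FALSE))
       x[1, 1]               # positive integer
       x["a", 1]             # character (row name)
       x[i & i, 1]           # bit
       x[as.bitwhich(i), 1]  # bitwhich
       x[ri(1, 1), 1]        # range index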
  • 18. Basic example: working with bit filters
     # R example
     l <- rd > 0.99
     rd[l]
     1e8 * .rambytes["logical"] / (1024^2)   # 381 MB for logical
     # ff example
     b1 <- b2 <- bit(length(fd))
     system.time( b1[] <- c(FALSE, TRUE) )
     system.time( for (i in chunk(fd)) b2[i] <- fd[i] > 0.99 )
     system.time( b <- b1 & b2 )
     object.size(b) / (1024^2)
     system.time( x <- fd[b] )
     x[1:10]
     sum(b) / length(b)   # less dense than 1/32
     w <- as.bitwhich(b)
     sum(w) / length(w)
     object.size(w) / (1024^2)
     system.time( x <- fd[w] )
     x[1:10]
  • 19. Advanced example: working with hybrid indexing
     # ff example
     hp <- as.hi(b)   # ignores pack=FALSE
     object.size(hp) / (1024^2)
     system.time( x <- fd[hp] )
     x[1:10]
     hu <- as.hi(w, pack=FALSE)
     object.size(hu) / (1024^2)
     system.time( x <- fd[hu] )
     x[1:10]
     # Don't do
     as.hi(1:1e8)
     # Do
     as.hi(quote(1:1e8))
     hi(1, 1e8)
     ri(1, 1e8)
     chunk(1, 1e8, by=1e8)
  • 20. Supported data structures (legend: soon on CRAN / prototype available / not yet implemented)
     structure                                  example                                        class(x)
     vector                                     ff(1:12)                                       c("ff_vector","ff")
     array                                      ff(1:12, dim=c(2,2,3))                         c("ff_array","ff")
     matrix                                     ff(1:12, dim=c(3,4))                           c("ff_matrix","ff_array","ff")
     data.frame                                 ffdf(sex=a, age=b)                             c("ffdf","ff")
     symmetric matrix with free diag            ff(1:6, dim=c(3,3), symm=TRUE, fixdiag=NULL)
     symmetric matrix with fixed diag           ff(1:3, dim=c(3,3), symm=TRUE, fixdiag=0)
     distance matrix                                                                           c("ff_dist","ff_symm","ff")
     mixed type arrays instead of data.frames                                                  c("ff_mixed","ff")
  • 21. Basic example: working with arrays (and matrices)
     # R example: physically stored by column
     array(1:12, dim=c(3,4))          # read by column
     matrix(1:12, 3, 4, byrow=TRUE)   # read by row
     # ff example: physically stored by column – like columnar OLAP
     ff(1:12, dim=c(3,4))                   # read by column
     ff(1:12, dim=c(3,4), bydim=c(2,1))     # read by row
     # ff example: physically stored by row – like an OLTP database
     ff(1:12, dim=c(3,4), dimorder=c(2,1))                # read by column
     ff(1:12, dim=c(3,4), dimorder=c(2,1), bydim=c(2,1))  # read by row
     fm <- ff(1:12, dim=c(3,4), dimorder=c(2,1))
     get.ff(fm, 1:12)   # note the physical order
     fm[1:12]           # [. exhibits standard R behaviour
     ncol(fm) <- 1e8    # not possible with this dimorder
     nrow(fm) <- 1e8    # possible with this dimorder
     fm <- ff(vmode="double", dim=c(1e4, 1e4))
     system.time( fm[1,] <- 1 )   # column store: slow
     system.time( fm[,1] <- 1 )   # column store: fast
     # even more pronounced difference for caching="mmeachflush"
  • 22. Hybrid copying semantics: physical and virtual attributes
     x <- ff(1:12, dim=c(3,4))
     x
     > str(physical(x))
     List of 9
      $ vmode    : chr "integer"      ff(); vmode()
      $ maxlength: int 12             ff(); maxlength()
      $ pattern  : chr "ff"           pattern()<-
      $ filename : chr "D:/DOCUME~1/JENSOE~1/LOCALS~1/Temp/RtmpKDilKg/ff462a1d5a.ff"   filename()<-
      $ pagesize : int 65536          open(, pagesize=)
      $ finalizer: chr "delete"       finalizer()<-
      $ finonexit: logi TRUE          ff()
      $ readonly : logi FALSE         open(, readonly=)
      $ caching  : chr "mmnoflush"    open(, caching=)
     > str(virtual(x))
     List of 4
      $ Length   : int 12             length()<-
      $ Dim      : int [1:2] 3 4      dim()<-
      $ Dimorder : int [1:2] 1 2      dimorder()<-
      $ Symmetric: logi FALSE         ff(); symmetric()
  • 23. Hybrid copying semantics in action: different virtual 'views' into the same ff
     a <- ff(1:12, dim=c(3,4))
     b <- a
     dim(b) <- c(4,3)
     dimorder(b) <- c(2:1)
     > a
     ff (open) integer length=12(12) dim=c(3,4) dimorder=c(1,2)
          [,1] [,2] [,3] [,4]
     [1,]    1    4    7   10
     [2,]    2    5    8   11
     [3,]    3    6    9   12
     > b                              # a virtual transpose of a
     ff (open) integer length=12(12) dim=c(4,3) dimorder=c(2,1)
          [,1] [,2] [,3]
     [1,]    1    2    3
     [2,]    4    5    6
     [3,]    7    8    9
     [4,]   10   11   12
     vt(a)   # shortcut to virtually transpose
     t(a)    # == clone(vt(a))
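     A minimal sketch of the consequence: b is only a virtual view on the same file as a, so writes through b are visible in a, while clone() (slide 33) makes an independent physical copy:
       library(ff)
       a <- ff(1:12, dim = c(3, 4))
       b <- a
       dim(b) <- c(4, 3)        # virtual attribute changes on b only
       b[1, 1] <- 99L
       a[1, 1]                  # 99: a and b share the same physical file
       a2 <- clone(a)           # deep copy, nothing shared
       a2[1, 1] <- 1L
       a[1, 1]                  # still 99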
  • 24. Basic example: working with data.frames
     # R example
     id <- 1:12
     gender <- sample(factor(c("male","female","unknown")), 12, TRUE)
     rating <- matrix(sample(1:6, 12*10, TRUE), 12, 10)
     colnames(rating) <- paste("r", 1:10, sep="")
     df <- data.frame(id, gender, rating)
     df[1:3,]
     # ff example
     fid <- as.ff(id); fgender <- as.ff(gender); frating <- as.ff(rating)
     fdf <- ffdf(id=fid, gender=fgender, frating)
     identical(df, fdf[,])
     fdf[1:3,]    # data.frame
     fdf[,1:4]    # data.frame
     fdf[1:4]     # ffdf
     fdf[]        # ffdf
     fdf[[2]]     # ff
     fdf$gender   # ff
  • 25. Advanced example: physical structure of data.frames
     # R example
     # remember that 'rating' was a matrix
     # but data.frame has copied it into columns (unless we use I(rating))
     str(df)
     # ff example
     physical(fdf)   # ffdf has *not* copied anything
     # let's physically copy
     fdf2 <- ffdf(id=fid, gender=fgender, frating, ff_split=3)
     physical(fdf2)
     filename(fid)
     filename(fdf$id)
     filename(fdf2$id)
     nrow(fdf2) <- 1e6
     fdf2[1e6,] <- fdf[1,]
     fdf2
     nrow(fdf2) <- 12   # understand this error: pros and cons of embedded ff_matrix
     nrow(fdf) <- 16    # understand what this does to the original fid and fgender
     fdf3 <- fdf[1:2]
     nrow(fdf3) <- 16
     fgender
     nrow(fdf3) <- 12
  • 26. Basic example: reading and writing csv
     # R example
     write.csv(df, file="df.csv")
     cat(readLines("df.csv"), sep="\n")
     df2 <- read.csv(file="df.csv")
     df2
     # ff example
     write.csv.ffdf            # we leverage R's original functions
     args(write.table.ffdf)    # arguments of the workhorse function
     write.csv.ffdf(fdf, file="fdf.csv")
     cat(readLines("fdf.csv"), sep="\n")
     fdf2 <- read.csv.ffdf(file="fdf.csv")
     fdf2
     identical(fdf[,], fdf2[,])
  • 27. Advanced example: physical specification when reading a csv
     # ff example
     vmode(fdf2)
     args(read.table.ffdf)
     fdf2 <- read.csv.ffdf(file="fdf.csv",
       asffdf_args = list(vmode=list(quad="gender", nibble=3:12)),
       first.rows = 1,
       next.rows = 4,
       VERBOSE = TRUE)
     fdf2
     fdf3 <- read.csv.ffdf(file="fdf.csv",
       asffdf_args = list(vmode=list(nibble=3:12), ff_join=list(3:12)))
     fdf3
     # understand "unknown factor values mapped to first level"
     read.csv.ffdf(file="fdf.csv",
       asffdf_args = list(vmode=list(quad="gender", nibble=3:12)),
       appendLevels = FALSE,
       first.rows = 2,
       VERBOSE = TRUE)
  • 28. Basic example: file locations and file survival
     # R example: objects are not permanent
     # rm(df) simply removes the object
     # when closing R with q() everything is gone
     # ff example: object data in files is POTENTIALLY permanent
     # rm(fdf) just removes the R object,
     # the next gc() triggers a finalizer which acts on the file
     # when closing R with q()
     #   the attribute finonexit decides whether the finalizer is called
     #   finally fftempdir is unlinked
     physical(fd)
     dir(getOption("fftempdir"))
     # changing file locations and finalizers
     sapply(physical(fdf2), finalizer)
     filename(fdf2)                      # filename(ff) <- changes one ff
     pattern(fdf2) <- "./cwdpat_"
     filename(fdf2)                      # ./ renamed to cwdpat_ in getwd()
     sapply(physical(fdf2), finalizer)   # AND set finalizers to "close"
     pattern(fdf2) <- "temppat_"
     filename(fdf2)                      # renamed to temppat_ in fftempdir
     sapply(physical(fdf2), finalizer)   # AND set finalizers to "delete"
  • 29. Closing example: managing ff archives
     # R example
     # save.image() saves all R objects (but not ff files)
     # load() restores all R objects (if ff files exist, ff objects work)
     # ff example
     # get rid of the large objects (takes too long for demo)
     delete(frf); rm(frf)
     delete(fd); rm(fd)
     delete(fm); rm(fm)
     rm(b, b1, b2, w, x, hp, hu)
     ffsave.image(file="myff")   # => myff.RData + myff.ffData
     # close R
     # open R
     library(ff)
     str(ffinfo(file="myff"))
     ffload(file="myff", list="fdf2")
     sapply(physical(fdf2), finalizer)
     yaff <- ff(1:12)
     ffsave(yaff, file="myff", add=TRUE)
     ffdrop(file="myff")
  • 31. Before you start, make sure you read the important warnings in
     library(ff)
     ?LimWarn
     The default behavior of ff can be configured in options()
     getOption("fftempdir")     == "D:/.../Temp/RtmpidNQq9"
     getOption("ffextension")   == "ff"
     getOption("fffinonexit")   == TRUE
     getOption("ffdrop")        == TRUE         # or always drop=FALSE
     getOption("ffpagesize")    == 65536        # getdefaultpagesize()
     getOption("ffcaching")     == "mmnoflush"  # or "mmeachflush"
     getOption("ffbatchbytes")  == 16104816     # default 1% of RAM on Win32
                                                # set this under Linux !!
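     A minimal sketch of adjusting these defaults, e.g. on a Linux box where the 1%-of-RAM default for ffbatchbytes does not apply; the directory path and the 100 MB budget are illustrative assumptions:
       library(ff)
       options(fftempdir = "/data/fftmp")       # keep temporary ff files off a small /tmp
       options(ffbatchbytes = 100 * 1024^2)     # allow roughly 100 MB per processing chunk
       getOption("ffbatchbytes")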
  • 32. Behavior on rm() and on q()
     If we create or open an ff file, C++ resources are allocated, the file is opened and a finalizer is attached to the external pointer, which will be executed at certain events to release these resources.
     Available finalizers
       close         releases C++ resources and closes the file (default for named files)
       delete        releases C++ resources and deletes the file (default for temp files)
       deleteIfOpen  releases C++ resources and deletes the file only if the file was open
     The finalizer is executed
       rm(); gc()    at the next garbage collection after removal of the R-side object
       q()           at the end of an R session (only if finonexit=TRUE)
     Wrap-up of the temporary directory
       .onUnload     getOption("fftempdir") is unlinked and all ff files therein are deleted
     You need to understand these mechanisms, otherwise you might suffer …
       … unexpected loss of ff files
       … GBs of garbage somewhere in temporary directories
     Check and set the defaults to your needs, e.g. getOption("fffinonexit") …
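     A minimal sketch of the finalizer defaults described above; the file name is illustrative and assumes a writable working directory:
       library(ff)
       x <- ff(1:10, filename = "keepme.ff")  # named file outside fftempdir
       finalizer(x)                           # "close": rm(); gc() closes but keeps the file
       rm(x); gc()                            # keepme.ff survives on disk
       y <- ff(1:10)                          # unnamed file in fftempdir
       finalizer(y)                           # "delete": rm(); gc() removes the file as well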
  • 33. Update, cloning and coercion
     # plug in a temporary result into the original ff object
     update(origff, from=tmpff, delete=TRUE)
     # deep copy with no shared attributes
     y <- clone(x)
     # three variants to cache a complete ff object
     # in R-side RAM and write it back to disk later
     # variant deleting the ff
     ramobj <- as.ram(ffobj); delete(ffobj)
     # some operations purely in RAM
     ffobj <- as.ff(ramobj)
     # variant retaining the ff
     ramobj <- as.ram(ffobj); close(ffobj)
     # some operations purely in RAM
     ffobj <- as.ff(ramobj, overwrite=TRUE)
     # variant using update
     ramobj <- as.ram(ffobj)
     update(ffobj, from=ramobj)
  • 34. Atomic access functions, methods and generics
                                       reading    writing             combined reading and writing
     single element                    get.ff     set.ff              getset.ff
     contiguous vector                 read.ff    write.ff            readwrite.ff
     indexed access with HIP and vw    [          [<-(, add=FALSE)    swap(, add=FALSE)
     for ram compatibility                        add(x, i, value)    swap.default
  • 35. HIP optimized disk access
     Hybrid Index Preprocessing (HIP)
     ffobj[1:1000000000] will silently submit the index information to as.hi(quote(1:1000000000)), which does the HIP:
       rather parses than expands index expressions like 1:1000000000
       stores index information either plain or as rle-packed index increments (therefore 'hybrid')
       sorts the index and stores the information needed to restore the original order
     Benefits
       minimized RAM requirements for index information
       all elements of the ff file are accessed in strictly increasing position
     Costs
       RAM needed for the HI may be double the RAM of a plain index (due to re-ordering)
       RAM needed during HIP may be higher than for the final index (due to sorting)
     Currently the preprocessing is almost purely R code (only critical parts in fast C code: intisasc, intisdesc, intrle)
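     A minimal sketch of the RAM benefit, comparing a fully materialized integer index with the packed hybrid index for the same positions (sizes in the comments are approximate):
       library(ff)
       x <- ff(vmode = "double", length = 1e6)
       i_plain  <- sort(sample(1e6))         # materialized index, ~4 MB
       i_packed <- as.hi(quote(1:1e6))       # parsed and stored as packed increments
       object.size(i_plain)
       object.size(i_packed)                 # a small fraction of the plain index
       identical(x[i_packed], x[i_plain])    # same elements either way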
  • 36. Random access to a permanent vector of integers: the cost of HIP in [.ff and [<-.ff is well invested compared to naive access in get.ff and set.ff
     [Bar charts, y-axis in seconds, comparing get.ff/set.ff with [.ff/[<-.ff across panels: reading and writing 100K random positions in 512K and 512M element vectors, with and without caching="mmeachflush".]
  • 37. Parsing of index expressions
     # The parser knows 'c()' and ':', nothing else
     # [.ff calls as.hi like as.hi(quote(index.expression))
     # efficient index expressions
     a <- 1
     b <- 100
     as.hi(quote(c(a:b, 100:1000)))     # parsed (packed)
     as.hi(quote(c(1000:100, 100:1)))   # parsed and reversed (packed)
     # neither ascending nor descending sequences
     as.hi(quote(c(2:10, 1)))   # parsed, but then expanded and sorted
                                # plus RAM for re-ordering
     # parsing aborted when finding expressions with length>16
     x <- 1:100; as.hi(quote(x))   # x evaluated, then rle-packed
     as.hi(quote((1:100)))         # () stopped here, ok in a[(1:100)]
     # parsing skipped
     as.hi(1:100)   # index expanded, then rle-packed
     # parsing and packing skipped
     as.hi(1:100, pack=FALSE)          # index expanded
     as.hi(quote(1:100), pack=FALSE)   # index expanded
  • 38. RAM considerations
     # ff is currently limited to length(ff) <= .Machine$integer.max
     # storing 370 MB of integer data
     a <- ff(0L, dim=c(1000000, 100))
     # obviously 370 MB for the return value
     b <- a[]
     # zero RAM for index or recycling
     a[] <- 1                    # thanks to recycling in C
     a[] <- 0:1
     a[1:100000000] <- 0:1       # thanks to HIP
     a[100000000:1] <- 1:0
     # Attention: 370 MB for the recycled value
     a[, bydim=c(2,1)] <- 0:1
     # don't do this: 5x 370 MB during HIP
     a[offset+(1:100000000)] <- 1    # better: a[(o+1):(o+n)] <- 1
     # Without chunking the final costs are
     a[sample(100000000)] <- 1       # 370 MB index + 370 MB re-order
     a[sample(100000000)] <- 0:1     # dito + 370 MB recycling
  • 39. Lessons from RAM investigation
     rle() requires up to 9x its input RAM*; without using structure() this reduces to 7x RAM.
     intrle() uses an optimized C version, needs up to 2x RAM and is faster by a factor of 50.
     Trick: intrle() returns NULL if the compression achieved is worse than 33.3%. Thus the RAM needed is at most
       1/1 for the input vector
       1/3 for collecting values
       1/3 for collecting lengths
       1/3 buffer for copying to the return value
     * Measured in version 2.6.2
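     A minimal sketch of the intrle() trick; this assumes intrle() is exported by the installed version of bit (older releases shipped it alongside ff):
       library(bit)
       intrle(rep(1L, 30))      # one long run: returns an rle object
       intrle(sample.int(30L))  # no worthwhile compression: returns NULL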
  • 40. A physical to virtual hierarchy
     R.ff: cubelet processing
     virtual window       vw(ff) <-                 the offset, window and rest components of vw must match the current length and dim
     virtual dimension    dim(ff) <- c(3,4)         dim must match the current length
                          dimorder(ff) <- c(2,1)
     virtual / physical   length(ff) <- 12          physical length changes change the physical file size
     physical             ff <- ff(length=16)       maxlength(ff)==16   # read-only
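     A minimal sketch walking this hierarchy bottom-up on a toy vector (sizes are illustrative):
       library(ff)
       x <- ff(1:16)            # physical: maxlength(x) == 16, fixed at creation
       maxlength(x)             # read-only
       length(x) <- 12          # physical length, changes the usable file size
       dim(x) <- c(3, 4)        # virtual: must match the current length
       dimorder(x) <- c(2, 1)   # virtual: reinterpret the physical storage order
       virtual(x)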