LANGUAGES:
A JOURNEY
RANCHO DEV 2017
@akitaonrails
www.theconf.club
A Journey through New Languages - Rancho Dev 2017
Language Syntax is EASY
Architectures (PATTERNS) are HARD
git checkout -b old_version remotes/origin/old_version
time bin/manga-downloadr -t
#!/usr/bin/env	ruby	
$LOAD_PATH.unshift	File.join(File.dirname(__FILE__),	'..',	'lib')	
require	'optparse'	
options	=	{	test:	false	}	
option_parser	=	OptionParser.new	do	|opts|	
		opts.banner	=	"Usage:	manga-downloadr	[options]"	
		opts.on("-t",	"--test",	"Test	routine")	do	|t|	
				options[:url]	=	"http://www.mangareader.net/onepunch-man"	
				options[:name]	=	"one-punch-man"	
				options[:directory]	=	"/tmp/manga-downloadr/one-punch-man"	
				options[:test]	=	true	
		end	
		opts.on("-u	URL",	"--url	URL",	
				"Full	MangaReader.net	manga	homepage	URL	-	required")	do	|v|	
				options[:url]	=	v	
		end	
		opts.on("-n	NAME",	"--name	NAME",	
				"slug	to	be	used	for	the	sub-folder	to	store	all	manga	files	-	required")	do	|n|	
				options[:name]	=	n	
		end	
		opts.on("-d	DIRECTORY",	"--directory	DIRECTORY",	
				"main	folder	where	all	mangas	will	be	stored	-	required")	do	|d|	
				options[:directory]	=	d	
		end	
		opts.on("-h",	"--help",	"Show	this	message")	do	
				puts	opts	
				exit	
		end	
end
option_parser.parse!(ARGV) # actually parse ARGV into the options hash
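Put together, a regular (non-test) run would be invoked like this — the URL and paths are illustrative, borrowed from the test defaults above:

bin/manga-downloadr -u http://www.mangareader.net/onepunch-man \
  -n one-punch-man -d /tmp/manga-downloadr/one-punch-man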
require 'manga-downloadr'

generator = MangaDownloadr::Workflow.create(options[:url], options[:name], options[:directory])

generator.fetch_chapter_urls!
generator.fetch_page_urls!
generator.fetch_image_urls!
generator.fetch_images!
generator.compile_ebooks!
require 'manga-downloadr'

generator = MangaDownloadr::Workflow.create(options[:url], options[:name], options[:directory])

puts "Massive parallel scanning of all chapters "
generator.fetch_chapter_urls!
puts "\nMassive parallel scanning of all pages "
generator.fetch_page_urls!
puts "\nMassive parallel scanning of all images "
generator.fetch_image_urls!
puts "\nTotal page links found: #{generator.chapter_pages_count}"
puts "\nMassive parallel download of all page images "
generator.fetch_images!
puts "\nCompiling all images into PDF volumes "
generator.compile_ebooks!
puts "\nProcess finished."
require 'manga-downloadr'

generator = MangaDownloadr::Workflow.create(options[:url], options[:name], options[:directory])

unless generator.state?(:chapter_urls)
  puts "Massive parallel scanning of all chapters "
  generator.fetch_chapter_urls!
end
unless generator.state?(:page_urls)
  puts "\nMassive parallel scanning of all pages "
  generator.fetch_page_urls!
end
unless generator.state?(:image_urls)
  puts "\nMassive parallel scanning of all images "
  generator.fetch_image_urls!
  puts "\nTotal page links found: #{generator.chapter_pages_count}"
end
unless generator.state?(:images)
  puts "\nMassive parallel download of all page images "
  generator.fetch_images!
end
unless options[:test]
  puts "\nCompiling all images into PDF volumes "
  generator.compile_ebooks!
end
puts "\nProcess finished."
MangaDownloadr::Workflow
module MangaDownloadr
		ImageData	=	Struct.new(:folder,	:filename,	:url)	
		class	Workflow	
				def	initialize(root_url	=	nil,	manga_name	=	nil,	manga_root	=	nil,	options	=	{})	
				end	
				def	fetch_chapter_urls!	
				end	
				def	fetch_page_urls!	
				end	
				def	fetch_image_urls!	
				end	
				def	fetch_images!	
				end	
				def	compile_ebooks!	
				end	
				def	state?(state)	
				end	
				private	
				def	current_state(state)	
				end	
		end	
end
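The state? and current_state bodies are left empty on the slide. A minimal sketch of how that checkpointing could work — assuming a marker file per completed step, which is a hypothetical scheme, not necessarily the gem's real persistence:

require 'fileutils'

# Hypothetical checkpoint scheme: touch one marker file per finished step so
# an interrupted run can skip straight to the first unfinished stage.
def state?(state)
  File.exist?(File.join(manga_root_folder, ".#{state}"))
end

def current_state(state)
  FileUtils.mkdir_p(manga_root_folder)
  FileUtils.touch(File.join(manga_root_folder, ".#{state}"))
end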
fetch_chapter_urls!
def fetch_chapter_urls!
  doc = Nokogiri::HTML(open(manga_root_url)) # Kernel#open from open-uri
		self.chapter_list	=	doc.css("#listing	a").map	{	|l|	l['href']}	
		self.manga_title		=	doc.css("#mangaproperties	h1").first.text	
		current_state	:chapter_urls	
end
def fetch_page_urls!
  chapter_list.each do |chapter_link|
    response = Typhoeus.get "http://www.mangareader.net#{chapter_link}"
    chapter_doc = Nokogiri::HTML(response.body)
    pages = chapter_doc.xpath("//div[@id='selectpage']//select[@id='pageMenu']//option")
    chapter_pages.merge!(chapter_link => pages.map { |p| p['value'] })
    print '.'
  end
  self.chapter_pages_count = chapter_pages.values.inject(0) { |total, list| total + list.size }
  current_state :page_urls
end
def fetch_page_urls!
  chapter_list.each do |chapter_link|
    begin
      response = Typhoeus.get "http://www.mangareader.net#{chapter_link}"
      begin
        chapter_doc = Nokogiri::HTML(response.body)
        pages = chapter_doc.xpath("//div[@id='selectpage']//select[@id='pageMenu']//option")
        chapter_pages.merge!(chapter_link => pages.map { |p| p['value'] })
        print '.'
      rescue => e
        self.fetch_page_urls_errors << { url: chapter_link, error: e, body: response.body }
        print 'x'
      end
    rescue => e
      puts e
    end
  end
  unless fetch_page_urls_errors.empty?
    puts "\nErrors fetching page urls:"
    puts fetch_page_urls_errors
  end
  self.chapter_pages_count = chapter_pages.values.inject(0) { |total, list| total + list.size }
  current_state :page_urls
end
def fetch_page_urls!
  hydra = Typhoeus::Hydra.new(max_concurrency: hydra_concurrency)
  chapter_list.each do |chapter_link|
    begin
      request = Typhoeus::Request.new "http://www.mangareader.net#{chapter_link}"
      request.on_complete do |response|
        begin
          chapter_doc = Nokogiri::HTML(response.body)
          pages = chapter_doc.xpath("//div[@id='selectpage']//select[@id='pageMenu']//option")
          chapter_pages.merge!(chapter_link => pages.map { |p| p['value'] })
          print '.'
        rescue => e
          self.fetch_page_urls_errors << { url: chapter_link, error: e, body: response.body }
          print 'x'
        end
      end
      hydra.queue request
    rescue => e
      puts e
    end
  end
  hydra.run
  unless fetch_page_urls_errors.empty?
    puts "\nErrors fetching page urls:"
    puts fetch_page_urls_errors
  end
  self.chapter_pages_count = chapter_pages.values.inject(0) { |total, list| total + list.size }
  current_state :page_urls
end
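Isolated from the workflow, the Typhoeus::Hydra pattern above boils down to this self-contained sketch (the paths here are placeholders):

require 'typhoeus'

# Queue many requests, register a per-request completion callback, then run
# them concurrently; hydra.run blocks until every queued request completes.
hydra = Typhoeus::Hydra.new(max_concurrency: 10)
["/naruto", "/bleach"].each do |path|
  request = Typhoeus::Request.new("http://www.mangareader.net#{path}")
  request.on_complete { |response| puts "#{path}: #{response.code}" }
  hydra.queue request
end
hydra.run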
def fetch_image_urls!
  hydra = Typhoeus::Hydra.new(max_concurrency: hydra_concurrency)
  chapter_list.each do |chapter_key|
    chapter_pages[chapter_key].each do |page_link|
      begin
        request = Typhoeus::Request.new "http://www.mangareader.net#{page_link}"
        request.on_complete do |response|
          begin
            chapter_doc = Nokogiri::HTML(response.body)
            image       = chapter_doc.css('#img').first
            tokens      = image['alt'].match(/^(.*?)\s-\s(.*?)$/)
            extension   = File.extname(URI.parse(image['src']).path)
            chapter_images.merge!(chapter_key => []) if chapter_images[chapter_key].nil?
            chapter_images[chapter_key] << ImageData.new(tokens[1], "#{tokens[2]}#{extension}", image['src'])
            print '.'
          rescue => e
            self.fetch_image_urls_errors << { url: page_link, error: e }
            print 'x'
          end
        end
        hydra.queue request
      rescue => e
        puts e
      end
    end
  end
  hydra.run
  unless fetch_image_urls_errors.empty?
    puts "\nErrors fetching image urls:"
    puts fetch_image_urls_errors
  end
  current_state :image_urls
end
def fetch_images!
  hydra = Typhoeus::Hydra.new(max_concurrency: hydra_concurrency)
  chapter_list.each_with_index do |chapter_key, chapter_index|
    chapter_images[chapter_key].each do |file|
      downloaded_filename = File.join(manga_root_folder, file.folder, file.filename)
      next if File.exist?(downloaded_filename) # effectively resumes the download list without re-downloading everything
      request = Typhoeus::Request.new file.url
      request.on_complete do |response|
        begin
          # download
          FileUtils.mkdir_p(File.join(manga_root_folder, file.folder))
          File.open(downloaded_filename, "wb+") { |f| f.write response.body }
          unless is_test
            # resize
            image = Magick::Image.read(downloaded_filename).first
            resized = image.resize_to_fit(600, 800)
            resized.write(downloaded_filename) { self.quality = 50 }
            GC.start # to avoid too big a leak (ImageMagick is notorious for that, especially on resizes)
          end
          print '.'
        rescue => e
          self.fetch_images_errors << { url: file.url, error: e }
          print '#'
        end
      end
      hydra.queue request
    end
  end
  hydra.run
  unless fetch_images_errors.empty?
    puts "\nErrors downloading images:"
    puts fetch_images_errors
  end
  current_state :images
end
def compile_ebooks!
  folders = Dir[manga_root_folder + "/*/"].sort_by { |element| element.split(" ").last.to_i }
  self.download_links = folders.inject([]) do |list, folder|
    list += Dir[folder + "*.*"].sort_by { |element| element.split(" ").last.to_i }
  end
  # concatenating PDF files (250 pages per volume)
  chapter_number = 0
  while !download_links.empty?
    chapter_number += 1
    pdf_file = File.join(manga_root_folder, "#{manga_title} #{chapter_number}.pdf")
    list = download_links.slice!(0..pages_per_volume)
    Prawn::Document.generate(pdf_file, page_size: page_size) do |pdf|
      list.each do |image_file|
        begin
          pdf.image image_file, position: :center, vposition: :center
        rescue => e
          puts "Error in #{image_file} - #{e}"
        end
      end
    end
    print '.'
  end
  current_state :ebooks
end
time bin/manga-downloadr -t
17.18s user 17.62s system 41% cpu 1:24.04 total
A Journey through New Languages - Rancho Dev 2017
.
├── _build
│   └── ...
├── config
│   └── config.exs
├── deps
│   ├── ...
├── ex_manga_downloadr
├── lib
│   ├── ex_manga_downloadr
│   │   ├── cli.ex
│   │   ├── mangafox
│   │   │   ├── chapter_page.ex
│   │   │   ├── index_page.ex
│   │   │   └── page.ex
│   │   ├── mangareader
│   │   │   ├── chapter_page.ex
│   │   │   ├── index_page.ex
│   │   │   └── page.ex
│   │   ├── manga_wrapper.ex
│   │   └── workflow.ex
│   └── ex_manga_downloadr.ex
├── mix.exs
├── mix.lock
├── README.md
└── test
    ├── ex_manga_downloadr
    │   ├── mangafox_test.exs
    │   └── mangareader_test.exs
    ├── ex_manga_downloadr_test.exs
    └── test_helper.exs

61 directories, 281 files
mix.exs
defmodule ExMangaDownloadr.Mixfile do
		use	Mix.Project	
		def	project	do	
				[app:	:ex_manga_downloadr,	
					version:	"1.0.2",	
					elixir:	"~>	1.4",	
					build_embedded:	Mix.env	==	:prod,	
					start_permanent:	Mix.env	==	:prod,	
					escript:	[main_module:	ExMangaDownloadr.CLI],	
					deps:	deps()]	
		end	
		def	application	do	
				[applications:	[:logger,	:httpoison,	:porcelain,	:observer]]	
		end	
		defp	deps	do	
				[	
						{:httpoison,	"~>	0.11"},	
						{:floki,	"~>	0.17"},	
						{:porcelain,	"~>	2.0.3"},	
						{:mock,	"~>	0.2",	only:	:test}	
				]	
		end	
end
Mixfile
PoolManagement
workflow.ex
defmodule ExMangaDownloadr.Workflow do
  def determine_source(url) do
  end
  def chapters({url, source}) do
    {:ok, {_manga_title, chapter_list}} = MangaWrapper.index_page(url, source)
    {chapter_list, source}
  end
  def pages({chapter_list, source}) do
    pages_list = chapter_list
        |> Task.async_stream(MangaWrapper, :chapter_page, [source], max_concurrency: @max_demand)
        |> Enum.to_list()
        |> Enum.reduce([], fn {:ok, {:ok, list}}, acc -> acc ++ list end)
    {pages_list, source}
  end
  def images_sources({pages_list, source}) do
    pages_list
        |> Task.async_stream(MangaWrapper, :page_image, [source], max_concurrency: @max_demand)
        |> Enum.to_list()
        |> Enum.map(fn {:ok, {:ok, image}} -> image end)
  end
  def process_downloads(images_list, directory) do
    images_list
        |> Task.async_stream(MangaWrapper, :page_download_image, [directory],
              max_concurrency: div(@max_demand, 2), timeout: @download_timeout) # integer division: max_concurrency must be an integer
        |> Enum.to_list()
    directory
  end
  def optimize_images(directory) do … end
  def compile_pdfs(directory, manga_name) do … end
  defp compile_volume(manga_name, directory, {chunk, index}) do … end
  defp prepare_volume(manga_name, directory, chunk, index) do … end
  defp chunk(collection, default_size) do … end
end
:chapter_page
A Journey through New Languages - Rancho Dev 2017
POOL
manga_wrapper.ex
defmodule MangaWrapper do
		require	Logger	
		def	index_page(url,	source)	do	
			source	
						|>	manga_source("IndexPage")	
						|>	apply(:chapters,	[url])	
		end	
		def	chapter_page(chapter_link,	source)	do	
				source	
						|>	manga_source("ChapterPage")	
						|>	apply(:pages,	[chapter_link])	
		end	
		def	page_image(page_link,	source)	do	
				source	
						|>	manga_source("Page")	
						|>	apply(:image,	[page_link])	
		end	
		def	page_download_image(image_data,	directory)	do	
				download_image(image_data,	directory)	
		end	
	defp	manga_source(source,	module)	do	
				case	source	do	
						"mangareader"	->	:"Elixir.ExMangaDownloadr.MangaReader.#{module}"	
						"mangafox"				->	:"Elixir.ExMangaDownloadr.Mangafox.#{module}"	
				end	
		end	
		defp	download_image({image_src,	image_filename},	directory)	do	
		end	
end	
:chapter_page
ChapterPage
defmodule	ExMangaDownloadr.Mangafox.ChapterPage	do	
		require	Logger	
		require	ExMangaDownloadr	
		def	pages(chapter_link)	do	
				ExMangaDownloadr.fetch	chapter_link,	do:	fetch_pages(chapter_link)	
		end	
  defp fetch_pages(html, chapter_link) do
    [_page|link_template] = chapter_link |> String.split("/") |> Enum.reverse
				html	
				|>	Floki.find("div[id='top_center_bar']	option")	
				|>	Floki.attribute("value")	
				|>	Enum.reject(fn	page_number	->	page_number	==	"0"	end)	
				|>	Enum.map(fn	page_number	->		
						["#{page_number}.html"|link_template]	
								|>	Enum.reverse	
								|>	Enum.join("/")	
				end)	
		end	
end
cli.ex
defmodule ExMangaDownloadr.CLI do
  alias ExMangaDownloadr.Workflow
  require ExMangaDownloadr
  def main(args) do
    args
    |> parse_args
    |> process
  end
  ...
  defp parse_args(args) do
  end
  defp process(:help) do
  end
  defp process({directory, url}) do
    File.mkdir_p!(directory)
    File.mkdir_p!("/tmp/ex_manga_downloadr_cache")
    manga_name = directory |> String.split("/") |> Enum.reverse |> Enum.at(0)
    url
        |> Workflow.determine_source
        |> Workflow.chapters
        |> Workflow.pages
        |> Workflow.images_sources
        |> Workflow.process_downloads(directory)
        |> Workflow.optimize_images
        |> Workflow.compile_pdfs(manga_name)
        |> finish_process
  end
  defp process_test(directory, url) do
  end
  defp finish_process(directory) do
  end
end
Workflow
mix deps.get
mix test
mix escript.build
ex_manga_downloadr - 4.6M
time ./ex_manga_downloadr --test
32.03s user 57.97s system 120% cpu 1:14.45 total
A Journey through New Languages - Rancho Dev 2017
.
├── _build
│   └── ...
├── config
│   └── config.exs
├── deps
│   ├── ...
├── ex_manga_downloadr
├── lib
│   ├── ex_manga_downloadr
│   │   ├── cli.ex
│   │   ├── mangafox
│   │   │   ├── chapter_page.ex
│   │   │   ├── index_page.ex
│   │   │   └── page.ex
│   │   ├── mangareader
│   │   │   ├── chapter_page.ex
│   │   │   ├── index_page.ex
│   │   │   └── page.ex
│   │   ├── manga_wrapper.ex
│   │   └── workflow.ex
│   └── ex_manga_downloadr.ex
├── mix.exs
├── mix.lock
├── README.md
└── test
    ├── ex_manga_downloadr
    │   ├── mangafox_test.exs
    │   └── mangareader_test.exs
    ├── ex_manga_downloadr_test.exs
    └── test_helper.exs

61 directories, 281 files
.
├── cr_manga_downloadr
├── libs
│   ├── ...
├── LICENSE
├── README.md
├── shard.lock
├── shard.yml
├── spec
│   ├── cr_manga_downloadr
│   │   ├── chapters_spec.cr
│   │   ├── concurrency_spec.cr
│   │   ├── image_downloader_spec.cr
│   │   ├── page_image_spec.cr
│   │   └── pages_spec.cr
│   ├── fixtures
│   │   ├── ...
│   └── spec_helper.cr
└── src
    ├── cr_manga_downloadr
    │   ├── chapters.cr
    │   ├── concurrency.cr
    │   ├── downloadr_client.cr
    │   ├── image_downloader.cr
    │   ├── page_image.cr
    │   ├── pages.cr
    │   ├── records.cr
    │   ├── version.cr
    │   └── workflow.cr
    └── cr_manga_downloadr.cr
File.mkdir_p!(directory)
File.mkdir_p!("/tmp/ex_manga_downloadr_cache")
manga_name = directory |> String.split("/") |> Enum.reverse |> Enum.at(0)
url
    |> Workflow.determine_source
    |> Workflow.chapters
    |> Workflow.pages
    |> Workflow.images_sources
    |> Workflow.process_downloads(directory)
    |> Workflow.optimize_images
    |> Workflow.compile_pdfs(manga_name)
    |> finish_process
end
def	run	
		Dir.mkdir_p	@config.download_directory	
		pipe	Steps.fetch_chapters(@config)	
				.>>	Steps.fetch_pages(@config)	
				.>>	Steps.fetch_images(@config)	
				.>>	Steps.download_images(@config)	
				.>>	Steps.optimize_images(@config)	
				.>>	Steps.prepare_volumes(@config)	
				.>>	unwrap	
		puts	"Done!"	
end
# 1
y = c(b(a))

# 2
x = b(a)
y = c(x)

# Elixir Pipes
y = a
    |> b
    |> c

# Crystal Macro Pipes
y = pipe a
    .>> b
    .>> c
    .>> unwrap
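As an aside, plain Ruby (2.6+) can express the same left-to-right flow with Object#then; this snippet is just an illustration, not part of the talk:

# Ruby pipeline via Object#then: the value flows left to right
b = ->(x) { x + 1 }
c = ->(x) { x * 10 }
y = 2.then(&b).then(&c) # => 30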
A Journey through New Languages - Rancho Dev 2017
defmodule	ExMangaDownloadr.MangaReader.IndexPage	do	
		require	Logger	
		require	ExMangaDownloadr	
		def	chapters(manga_root_url)	do	
				ExMangaDownloadr.fetch	manga_root_url,	do:	collect	
		end	
		defp	collect(html)	do	
				{fetch_manga_title(html),	fetch_chapters(html)}	
		end	
		defp	fetch_manga_title(html)	do	
				html	
				|>	Floki.find("#mangaproperties	h1")	
				|>	Floki.text	
		end	
		defp	fetch_chapters(html)	do	
				html	
				|>	Floki.find("#listing	a")	
				|>	Floki.attribute("href")	
		end	
end
require	"./downloadr_client"	
require	"xml"	
module	CrMangaDownloadr	
		class	Chapters	<	DownloadrClient	
				def	fetch	
						html	=	get(@config.root_uri).as(XML::Node)	
						nodes	=	html.xpath_nodes(	
									"//table[contains(@id,	'listing')]//td//a/@href")	
						nodes.map	{	|node|	node.text.as(String)	}	
				end	
		end	
end	
DownloadrClient
module	CrMangaDownloadr	
		class	DownloadrClient	
				...	
				def	get(uri	:	String,	binary	=	false)	
						Dir.mkdir_p(@config.cache_directory)	unless	Dir.exists?(@config.cache_directory)	
						cache_path	=	File.join(@config.cache_directory,	cache_filename(uri))	
						while	true	
								begin	
										response	=	if	@cache_http	&&	File.exists?(cache_path)	
												body	=	File.read(cache_path)	
												HTTP::Client::Response.new(200,	body)	
										else	
												@http_client.get(uri,	headers:	HTTP::Headers{	"User-Agent"	=>	CrMangaDownloadr::USER_AGENT	})	
										end	
										case	response.status_code	
										when	301	
												uri	=	response.headers["Location"]	
										when	200	
												if	(	binary	||	@cache_http	)	&&	!File.exists?(cache_path)	
														File.open(cache_path,	"w")	do	|f|	
																f.print	response.body	
														end	
												end	
												if	binary	
														return	cache_path	
												else	
														return	XML.parse_html(response.body)	
												end	
										end	
								rescue	IO::Timeout	
										puts	"Sleeping	over	#{uri}"	
										sleep	1	
								end	
						end	
				end	
		...	
end	
DownloadrClient
require	"fiberpool"	
module	CrMangaDownloadr	
		struct	Concurrency(A,	B)	
				def	initialize(@config	:	Config,	@engine_class	:	DownloadrClient.class)	
				end	
    def fetch(collection : Array(A)?, &block : A, DownloadrClient -> Array(B)?) : Array(B)
						results	=	[]	of	B	
						if	collection	
								pool	=	Fiberpool.new(collection,	@config.download_batch_size)	
								pool.run	do	|item|	
										engine	=	@engine_class.new(@config)	
										if	reply	=	block.call(item,	engine)	
												results.concat(reply)	
										end	
								end	
						end	
						results	
				end	
		end	
end
fetch
Concurrency
module	CrMangaDownloadr	
		class	Workflow	
		end	
		module	Steps	
				def	self.fetch_chapters(config	:	Config)	
				end	
				def	self.fetch_pages(chapters	:	Array(String)?,	config	:	Config)	
						puts	"Fetching	pages	from	all	chapters	..."	
						reactor	=	Concurrency(String,	String).new(config,	Pages)	
						reactor.fetch(chapters)	do	|link,	engine|	
								engine.try(&.fetch(link)).as(Array(String))	
						end	
				end	
				def	self.fetch_images(pages	:	Array(String)?,	config	:	Config)	
				end	
				def	self.download_images(images	:	Array(Image)?,	config	:	Config)	
				end	
				def	self.optimize_images(downloads	:	Array(String),	config	:	Config)	
				end	
				def	self.prepare_volumes(downloads	:	Array(String),	config	:	Config)	
				end	
		end	
end
A Journey through New Languages - Rancho Dev 2017
crystal deps
crystal spec
crystal build src/cr_manga_downloadr.cr --release
cr_manga_downloadr 752K
time ./cr_manga_downloadr -t
5.57s user 6.79s system 14% cpu 1:26.76 total
.
├── _build
│   └── ...
├── config
│   └── config.exs
├── deps
│   ├── ...
├── ex_manga_downloadr
├── lib
│   ├── ex_manga_downloadr
│   │   ├── cli.ex
│   │   ├── mangafox
│   │   │   ├── chapter_page.ex
│   │   │   ├── index_page.ex
│   │   │   └── page.ex
│   │   ├── mangareader
│   │   │   ├── chapter_page.ex
│   │   │   ├── index_page.ex
│   │   │   └── page.ex
│   │   ├── manga_wrapper.ex
│   │   └── workflow.ex
│   └── ex_manga_downloadr.ex
├── mix.exs
├── mix.lock
├── README.md
└── test
    ├── ex_manga_downloadr
    │   ├── mangafox_test.exs
    │   └── mangareader_test.exs
    ├── ex_manga_downloadr_test.exs
    └── test_helper.exs
61 directories, 281 files
.
├── cr_manga_downloadr
├── libs
│   ├── ...
├── LICENSE
├── README.md
├── shard.lock
├── shard.yml
├── spec
│   ├── cr_manga_downloadr
│   │   ├── chapters_spec.cr
│   │   ├── concurrency_spec.cr
│   │   ├── image_downloader_spec.cr
│   │   ├── page_image_spec.cr
│   │   └── pages_spec.cr
│   ├── fixtures
│   │   ├── ...
│   └── spec_helper.cr
└── src
    ├── cr_manga_downloadr
    │   ├── chapters.cr
    │   ├── concurrency.cr
    │   ├── downloadr_client.cr
    │   ├── image_downloader.cr
    │   ├── page_image.cr
    │   ├── pages.cr
    │   ├── records.cr
    │   ├── version.cr
    │   └── workflow.cr
    └── cr_manga_downloadr.cr
.
├── bin
│   └── manga-downloadr
├── Gemfile
├── Gemfile.lock
├── lib
│   ├── manga-downloadr
│   │   ├── chapters.rb
│   │   ├── concurrency.rb
│   │   ├── downloadr_client.rb
│   │   ├── image_downloader.rb
│   │   ├── page_image.rb
│   │   ├── pages.rb
│   │   ├── records.rb
│   │   ├── version.rb
│   │   └── workflow.rb
│   └── manga-downloadr.rb
├── LICENSE.txt
├── manga-downloadr.gemspec
├── Rakefile
├── README.md
└── spec
    ├── fixtures
    │   ├── ...
    ├── manga-downloadr
    │   ├── chapters_spec.rb
    │   ├── concurrency_spec.rb
    │   ├── image_downloader_spec.rb
    │   ├── page_image_spec.rb
    │   └── pages_spec.rb
    └── spec_helper.rb
def	run	
		Dir.mkdir_p	@config.download_directory	
		pipe	Steps.fetch_chapters(@config)	
				.>>	Steps.fetch_pages(@config)	
				.>>	Steps.fetch_images(@config)	
				.>>	Steps.download_images(@config)	
				.>>	Steps.optimize_images(@config)	
				.>>	Steps.prepare_volumes(@config)	
				.>>	unwrap	
		puts	"Done!"	
end
def	self.run(config	=	Config.new)	
		FileUtils.mkdir_p	config.download_directory	
		CM(config,	Workflow)	
				.fetch_chapters	
				.fetch_pages(config)	
				.fetch_images(config)	
				.download_images(config)	
				.optimize_images(config)	
				.prepare_volumes(config)	
				.unwrap	
		puts	"Done!"	
end
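Both run methods lean on the same idea, published as chainable_methods (Ruby) and cr_chainable_methods (Crystal): wrap the first value, thread it through each step, and unwrap at the end. A minimal sketch of that idea in Crystal, using explicit Procs instead of the libraries' macro sugar:

struct Pipe(T)
  def initialize(@value : T)
  end

  # Apply the next step to the wrapped value, re-wrapping its result.
  def >>(step : T -> U) forall U
    Pipe(U).new(step.call(@value))
  end

  def unwrap
    @value
  end
end

doubled_sum = (Pipe.new([1, 2, 3]) >> ->(a : Array(Int32)) { a.map { |x| x * 2 } } >> ->(a : Array(Int32)) { a.sum }).unwrap
puts doubled_sum # => 12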
#	concurrency.cr	
pool	=	Fiberpool.new(collection,	@config.download_batch_size)	
pool.run	do	|item|	
		engine	=	@engine_class.new(@config)	
		if	reply	=	block.call(item,	engine)	
				results.concat(reply)	
		end	
end
pool    = Thread.pool(@config.download_batch_size) # thread/pool gem
mutex   = Mutex.new
results = []
collection.each do |item|
  pool.process {
    engine = @turn_on_engine ? @engine_klass.new(@config.domain, @config.cache_http) : nil
    reply  = block.call(item, engine)&.flatten
    mutex.synchronize do # threads share `results`, so guard the append
      results += (reply || [])
    end
  }
end
pool.shutdown # wait for queued work to finish
Fibers (Crystal) vs. Threads (Ruby)
module	CrMangaDownloadr	
		class	Pages	<	DownloadrClient	
				def	fetch(chapter_link	:	String)	
						html	=	get(chapter_link)	
						nodes	=	html.xpath_nodes("//div[@id='selectpage']//select[@id='pageMenu']//option")	
						nodes.map	{	|node|	"#{chapter_link}/#{node.text}"	}	
				end	
		end	
end
module	MangaDownloadr	
		class	Pages	<	DownloadrClient	
				def	fetch(chapter_link)	
						get	chapter_link	do	|html|	
								nodes	=	html.xpath("//div[@id='selectpage']//select[@id='pageMenu']//option")	
								nodes.map	{	|node|	[chapter_link,	node.children.to_s].join("/")	}	
						end	
				end	
		end	
end
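A hypothetical usage sketch of the Crystal Pages client above (the Config setup and chapter path are placeholder values; the constructor follows how Concurrency instantiates its engine class):

config = Config.new
links  = Pages.new(config).fetch("/onepunch-man/96")
links.each { |link| puts link }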
time bin/manga-downloadr -t
19.77s user 10.65s system 33% cpu 1:31.69 total
Ruby/Typhoeus  (hydra_concurrency = 50):  41% CPU, 1:24 min
Elixir 1.4.5   (@max_demand = 50):       120% CPU, 1:14 min
Crystal 0.23.0 (opt_batch_size = 50):     14% CPU, 1:26 min
Ruby 2.4.1     (opt_batch_size = 50):     33% CPU, 1:31 min

The wall-clock times all land within seconds of each other because the workload is network-bound: the runtime mostly waits on I/O. What differs is CPU usage - Elixir spreads work across cores (120%), while Crystal's cooperative fibers stay on a single thread (14%).
Ruby: Typhoeus (libcurl)
Elixir: OTP + Poolboy
Crystal: Fibers + Fiberpool
Ruby: Thread + thread/pool
manga-downloadr
ex_manga_downloadr
cr_manga_downloadr
fiberpool
cr_chainable_methods
chainable_methods
PREMATURE
OPTIMIZATION
The Root of ALL Evil
THANKS
@akitaonrails
slideshare.net/akitaonrails
www.theconf.club
