SBT extract

Why SBT

  1. Short, concise DSL, can be extended by pure Scala code
  2. Interactivity
  3. Background execution
  4. Default parallel execution (restriction on CPU, network and disk can be specified)
  5. Scala REPL integration
  6. Incremental compilation
  7. Default folder structure (can be adjusted)
  8. Defined workflow (can be adjusted or redefined)
  9. Type safety
  10. Direct dataflow between tasks is supported
  11. Simple hierarchy of build entities: just tasks and settings; many are predefined, and it is easy to add custom ones
  12. Cross-building (for several Scala versions in parallel)
  13. Plugin extensible

Folder structure

  • <root>/project/plugins.sbt
  • <root>/project/build.properties
  • <root>/build.sbt

SBT tasks are executable items; a task can depend on other tasks (using another task's return value inside its body) and can accept user input.

  • Declare a key: val keyName = taskKey[KeyType]("key description")
  • Assign a value: keyName := …
  • Get a value: keyName.value
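
A minimal build.sbt sketch of the pattern above; the `lineCount` task and its wiring are hypothetical:

```scala
// build.sbt (sketch) -- a custom task that depends on another task's value
val lineCount = taskKey[Int]("Counts lines in all compiled sources")

lineCount := {
  val files = (Compile / sources).value      // use another task's return value
  files.map(f => IO.readLines(f).size).sum   // sbt.IO ships with sbt itself
}
```

`show lineCount` in the sbt shell displays the computed value.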

An SBT setting is just a named value; it can depend only on literals or on the values of other settings. Its exact value is determined once at project load. A setting cannot depend on a task's return value.

  • settingName := settingValue – assigns (redefines, if already defined)
  • settingName += settingValue – appends a single value to a Seq
  • settingName ++= settingValue – appends a Seq to a Seq
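
Sketched on the standard `scalacOptions` setting (the flag values are illustrative):

```scala
// build.sbt (sketch) -- the three assignment operators on a Seq-valued setting
scalacOptions := Seq("-deprecation")           // assign, redefining any previous value
scalacOptions += "-feature"                    // append a single value to the Seq
scalacOptions ++= Seq("-unchecked", "-Xlint")  // append a whole Seq
```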

Scopes

  • project
  • configurations – namespaces for keys (default: Compile, Test, Runtime, IntegrationTest)
  • task
  • global – default, if not specified

Multi-project builds – can be declared in a single sbt file or in multiple files (one per project). An abstract parent project can hold common settings, which concrete child projects extend or redefine. dependsOn – defines a dependency between projects.
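
A build.sbt sketch of a two-project build; names and versions are illustrative:

```scala
// build.sbt (sketch) -- two subprojects sharing common settings
lazy val commonSettings = Seq(
  organization := "com.example",   // hypothetical values
  scalaVersion := "2.13.12"
)

lazy val core = (project in file("core"))
  .settings(commonSettings)

lazy val app = (project in file("app"))
  .settings(commonSettings)
  .dependsOn(core)                 // app gets core's classes on its classpath
```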

Sources (compile/test configurations):

  • location settings: javaSource, resourceDirectory, scalaSource.
  • filtering: includeFilter, excludeFilter.
  • Managed: autogenerated by SBT or added explicitly into build.
  • Unmanaged: created outside of SBT, written by the developer.

Dependencies (compile/test/runtime):

  • internal (between projects) or external (on a library outside the build – Maven / Ivy)
  • external dependencies can be: managed (Maven / Ivy) or unmanaged (jars in the lib folder)
  • resolvers – a setting to which additional Maven/Ivy external repositories can be added.

Dependency format: ModuleID – "groupID/organisation" % or %% "artifactID/product" % "version" (optional: "test", "provided")

  • exclude – the specified transitive dependency will be omitted (additionally, rules can be applied)
  • classifier – additional parameters, like the JDK version
  • intransitive or notTransitive – do not load transitive dependencies
  • withSources
  • withJavadoc
  • externalPom
  • externalIvy

Forking – executing Test or Run in a separate JVM; custom JVM settings can be applied

Session – in-memory SBT configuration; it is lost after a reload but can be saved as an SBT file.

SBT script troubleshooting: streams.value.log (logging inside a task)

Extending SBT: commands and plugins

Publishing artifact: publishTo

 

Functional approach specifics

Pure functions – no side effects; nothing happens beyond transforming input parameters into a return value. State is kept outside: it can be passed into a function and returned from it.

Techniques:

  • First-class citizen functions
  • Higher-order functions
  • Anonymous functions
  • Closures
  • Currying
  • Lazy evaluation

Referential transparency – a function call can be replaced by its return value

Higher-order function – a function that accepts another function as input (or returns one)

Currying – transforming a function of two parameters into a function of one parameter that returns a function of the remaining parameter. Partial application – fixing one parameter to a value, yielding a function of the remaining parameters.
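
A small Scala sketch of the distinction:

```scala
// Currying vs. partial application on a two-parameter function
val add: (Int, Int) => Int = (a, b) => a + b

val curriedAdd: Int => Int => Int = add.curried  // Int => (Int => Int)
val addTen: Int => Int = curriedAdd(10)          // partially applied: a is fixed to 10

println(addTen(5))   // 15
```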

Functional data structures – are immutable.

Instead of exceptions -> Option, Either, Try – errors as return values
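
A minimal Scala sketch using Either (the `safeDiv` function is hypothetical):

```scala
// Errors as return values: the failure case is part of the signature
def safeDiv(a: Int, b: Int): Either[String, Int] =
  if (b == 0) Left("division by zero") else Right(a / b)

println(safeDiv(10, 2))  // Right(5)
println(safeDiv(10, 0))  // Left(division by zero)
```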

Laziness – evaluating a value when it is used, not when it is declared

Memoization – remembering the result of a resource-consuming operation and returning it for later calls (a kind of caching)
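
A generic memoization sketch in Scala (the `memoize` helper is an illustration, not a library function):

```scala
import scala.collection.mutable

// Memoize a pure function: compute each input once, cache the result
def memoize[A, B](f: A => B): A => B = {
  val cache = mutable.Map.empty[A, B]
  a => cache.getOrElseUpdate(a, f(a))
}

var calls = 0
val slowSquare = memoize { (n: Int) => calls += 1; n * n }

slowSquare(4)
slowSquare(4)
println(calls)   // 1 -- the second call is served from the cache
```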

Strict functions – all parameters are evaluated before the function call; non-strict functions – have lazy (by-name) parameters
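
In Scala, non-strict parameters are written with `=>` (by-name). A sketch:

```scala
// A by-name (=>) parameter is non-strict: it is evaluated only if used
var evaluated = false
def sideEffect(): Boolean = { evaluated = true; true }

def lazyOr(a: Boolean, b: => Boolean): Boolean = if (a) true else b

lazyOr(true, sideEffect())   // b is never touched
println(evaluated)           // false
```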

Recursive functions consume data, corecursive functions produce it; both need a terminating condition.

Monoid – a type together with an associative binary operation that takes two instances of the type and returns a new instance, and an empty "identity" element
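
A minimal Scala encoding (the `Monoid` trait and `intAdd` instance are illustrative):

```scala
// A monoid: an associative combine and an identity element for a type
trait Monoid[A] {
  def empty: A                  // identity: combine(empty, a) == a
  def combine(x: A, y: A): A    // must be associative
}

val intAdd: Monoid[Int] = new Monoid[Int] {
  def empty = 0
  def combine(x: Int, y: Int) = x + y
}

// Any monoid lets us fold a list of its type
def fold[A](xs: List[A])(m: Monoid[A]): A =
  xs.foldLeft(m.empty)(m.combine)

println(fold(List(1, 2, 3, 4))(intAdd))   // 10
```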

Monads

  • computation context for other value
  • container, with two (or three) functions defined: unit (constructor) and flatMap; OR unit and compose; OR unit, map, and join
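
A sketch using Option, showing that `map` follows from unit + flatMap:

```scala
// Option as a monad: unit lifts a value, flatMap sequences computations
def unit[A](a: A): Option[A] = Some(a)

// map need not be primitive -- it is derivable from unit + flatMap
def map[A, B](fa: Option[A])(f: A => B): Option[B] =
  fa.flatMap(a => unit(f(a)))

println(map(Some(3))(_ + 1))            // Some(4)
println(map(Option.empty[Int])(_ + 1))  // None
```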

It is always possible to transform an impure function into a pure one plus two side effects: one producing the pure function's input, the other consuming the pure function's output.
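
A sketch of that split (names hypothetical): the effects sit at the edges, the core stays pure:

```scala
// Pure core: trivially testable, no side effects
def pureGreeting(name: String): String = s"Hello, $name!"

// Impure shell: one effect produces the input, another consumes the output
def greet(): Unit = {
  val name = scala.io.StdIn.readLine()  // side effect in
  println(pureGreeting(name))           // side effect out
}

println(pureGreeting("world"))   // Hello, world!
```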

Effect – an object that wraps an operation producing a side effect; it has a method that executes the side effect

Side effect – an operation that is not purely functional

Docker notes

Terms and definitions

  • Image – particular set of layers
  • Repository – set of images, fully identified by <host name>/<user name>/<repository name>
  • Tag – particular image name (docker tag command)
  • Index/registry – catalog of repositories

Docker architecture

  • Docker daemon – does all the work; has a REST interface (can be exposed outside the host)
  • Docker client – connects to the Docker daemon
  • Each container has its own PID tree
  • Possible states of a container: running, paused, restarting, exited

Create an image

  1. Altering a running container manually and committing the changes
  2. Dockerfile
  3. Dockerfile and an external configuration tool
  4. Dockerfile and a TAR file (containing all files from an existing machine) unrolled over an empty base image

Commands

  • "docker run" command
    • docker run -it <image name> – run a container interactively, with a connected console
    • docker run -d <image name> – run detached
    • --name <name> – give the container a name
    • --read-only – make the container's filesystem read-only
    • --restart=<policy: always, no, on-failure>
    • --link <container name>:<alias> – binds containers via exposed ports
    • --device – maps host devices into the container
    • --ipc – shares IPC items between containers
    • --cpuset-cpus – limits the CPU cores used by the container
    • -c/--cpu-shares – relative share of CPU allowed for the container
    • -m/--memory – limits the amount of memory accessible to the container
  • Port mapping: -p <host port number>:<container port number> <image name>, or -P – map all exposed ports
    • --icc=false – disables network communication between containers
    • --expose <port number>
    • --hostname <name>
    • --dns <ip addresses' array>
    • --add-host <host name>:<ip address>
    • --link – connects containers by name, since before a container starts no IP address is known
  • Docker network archetypes:
    • closed (only loopback), --net none
    • bridged (containers can communicate with one another, but must be explicitly configured to access the external network), --net bridge
    • joined (different containers share the same network stack), --net container
    • open (connected directly to the external network), --net host
  • Adding an environment variable: --env/-e <name>=<value>
  • Restarting containers: --restart with options no (default), always, on-failure (with an optional retry count)
  • Volume mapping – maps the host filesystem into the container's filesystem. Mapped folders are not committed; a mapped folder hides an existing container folder with the same name
    • -v/--volume <host path>:<container path> – for mounted host paths, can be mounted read-only
    • -v/--volume <container path> – for a Docker-managed volume
    • Data-only container (never needs to run): it can map some host folder, and other containers can simply reference it to obtain the mapped folder
    • --volumes-from <container name>
  • --rm – removes the container after it exits
  • "docker inspect" – returns metadata about an image or container, JSON-formatted, with a fancy filtering syntax
  • "docker kill" – kills a container
  • "docker stop" – stops a container (gracefully)
  • "docker build" – creates a new image from a Dockerfile; --no-cache – rebuild all instructions, otherwise only changed Dockerfile instructions are rebuilt and unchanged ones are taken from the cache of the previous build
  • "docker tag" – gives a name to a particular image
  • "docker commit" – creates an image from a running container; only filesystem changes are preserved.
  • "docker exec" – executes a command in a running container (basic – synchronous, daemon – background, interactive)
  • "docker search" – searches for an image in a registry
  • "docker history" – lists the commands executed to build the specified image
  • "docker help" <command name>
  • "docker ps" (-a) – lists running (and, with -a, other-state) containers
  • "docker logs" <container name> – shows output (stdout, stderr); the -f option follows the log output
  • "docker restart" <container name> – restarts a container
  • "docker rename" <old container name> <new container name>
  • "docker create" – creates, but does not start, a container (exited state)
  • "docker start" <container name> – starts an exited container
  • "docker top" <container name> – lists all processes running inside a container
  • "docker rm" <container name> – removes an exited container; -v – removes/decrements the reference on Docker-managed volumes
  • "docker login/logout" – access a registry
  • "docker rmi" <repository name> – removes a local image/repository
  • "docker save" – saves an image as a file
  • "docker load" – loads an image from a file
  • "docker diff" <container name> – shows filesystem differences between a container and its image
  • "docker export" – saves a container's filesystem as a tar archive
  • "docker import" – creates an image from a tar archive
  • Docker Machine – turns a PC (virtual or real) into a host for running containers by running an instance of a "machine", a process that serves as a platform for running containers. It is a command-line utility with several commands: create, ls, stop, start, restart, rm, kill, inspect, config, ip, url, upgrade.

Dockerfile – a script of instructions that builds an image to be run as a container

  • FROM – tag of an existing base image
  • MAINTAINER – author's e-mail
  • ONBUILD – registers a command to be executed when the image is used as a base for another build
  • RUN – command to run at build time
  • USER – sets the user and group
  • WORKDIR – sets the current directory
  • EXPOSE – port to expose
  • ADD – adds files into the image from the build context (or a URL); unpacks tar files
  • COPY – like ADD, but without unpacking
  • CMD – the command executed as the container's main process
  • ENTRYPOINT – like CMD, but parameters are expected to be provided via the run command
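
A sketch combining the instructions above; the base image, paths, and ports are illustrative:

```dockerfile
# Dockerfile sketch -- all names and paths are hypothetical
FROM openjdk:11-jre-slim
WORKDIR /opt/app
COPY target/app.jar app.jar
EXPOSE 8080
USER nobody
ENTRYPOINT ["java", "-jar", "app.jar"]
CMD ["--help"]
```

Here `docker run <image> --server` would replace the default CMD arguments while keeping the ENTRYPOINT.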

Compose – runs an application represented by a set of containers, defined in a YAML file

  • "docker-compose up" – runs the containers defined in the YAML file
  • "docker-compose ps" – all containers run by the YAML file
  • "docker-compose rm" – removes all containers defined by the YAML file
  • "docker-compose stop/kill" – like "docker"
  • "docker-compose logs" – like "docker"
  • "docker-compose build" – like "docker"
  • "docker-compose scale" – alters the number of instances of containers
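
A sketch of a compose file for a two-service application; all names are illustrative:

```yaml
# docker-compose.yml sketch -- service, image, and volume names are hypothetical
version: "2"
services:
  web:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:9.6
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```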

Docker Machine – provides drivers that allow running a Docker daemon on different hosts

Docker Swarm – a cluster of machines to run containers; it can balance based on available resources (Spread algorithm) as well as on custom filters (affinity, health, constraint, port, dependency), with built-in service discovery

Patterns of Enterprise Application Architecture – notes

Basic

  • Gateway – represents access to some system or resource
  • Registry
  • Value object

Domain logic

  • Transaction script – a single procedure handling everything between request arrival and data persistence
  • Domain model – contains both behaviour and data
  • Service layer – wraps the model with secondary operations like security, logging, etc.

Datasource

  • Table Data gateway – one instance per table – for all rows
  • Row Data gateway – one instance per row
  • ActiveRecord – one per row or view, contains domain logic too
  • Data Mapper – maps between business objects and stored rows

Object relation behaviour

  • Unit of work – holds the set of objects affected by a common operation or transaction, to persist them correctly
  • Identity map – keeps each object loaded from the database only once
  • Lazy load – loads data only when it is actually required/used
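
A minimal Identity Map sketch in Scala; the repository and its fake row loader are hypothetical:

```scala
import scala.collection.mutable

// Identity Map sketch: each database row materializes as at most one object
case class User(id: Long, name: String)

class UserRepository {
  private val identityMap = mutable.Map.empty[Long, User]

  // stands in for a real database read (hypothetical)
  private def loadRow(id: Long): User = User(id, s"user-$id")

  def find(id: Long): User =
    identityMap.getOrElseUpdate(id, loadRow(id))
}

val repo = new UserRepository
println(repo.find(1) eq repo.find(1))   // true -- the same instance both times
```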

Structural

  • Identity field – unique identifier of a row
  • Foreign key mapping
  • Association table mapping
  • Embedded value
  • Inheritance models – single table (all classes within the same table), class table (each has own table), concrete table (only concrete classes have own tables)
  • Query object
  • Repository

WEB representational

  • MVC
  • Page controller
  • Front controller
  • Template view
  • Transform view
  • Application controller – dedicated object that manages flow of application usage

Distribution

  • Remote facade
  • Data Transfer Object

Concurrent access

  • Optimistic lock – versioning and conflicts handling
  • Pessimistic lock – prevents conflicts
  • Coarse-grained lock
  • Implicit lock

Session state – can be kept on the client, the server, or in the database

Enterprise Integration Patterns extract

Basic:

  • Message
  • Message channel
  • Pipes and filters
  • Message routers
  • Message translator
  • Message Endpoint

Channels

  • Point-To-Point
  • Publish-Subscribe
  • Datatype channel – for a particular type of message
  • Invalid message channel – used if a receiver cannot process an obtained message
  • Dead letter channel – used if a message could not be delivered
  • Guaranteed delivery – persists messages on both ends of the channel
  • Channel adapter – connects a messaging channel to an application not built for messaging
  • Message bridge – a connection between different messaging systems
  • Message bus – a common messaging backbone shared by multiple applications

Construction

  • Command message – a command to execute, invoking a procedure
  • Document message – a message carrying data
  • Event message – a message carrying a state change
  • Request-Reply – two messages through two different channels
  • Return Address – the message contains a channel for the reply
  • Correlation identifier – an ID in the response message to match the request message sent
  • Message sequence – sequence identifier, position identifier, size or end indicator
  • Message expiration
  • Format indicator

Routing

  • Content based router
  • Message filter
  • Dynamic router
  • Splitter+Aggregator
  • Resequencer – reorders a sequence of messages
  • Scatter-Gather
  • Message broker – contains the logic of the message flow

Transformation

  • Envelope
  • Content enricher / filter
  • Normalizer – casts all messages to a common format

Endpoints

  • Gateway – connects the application and the messaging system
  • Mapper – maps between messages and business objects
  • Polling consumer
  • Competing consumers – several consumers listen on the same channel, but each message is obtained by only one

System management

  • Control Bus – a channel that delivers command/control messages to all parties to manage their work
  • Detour
  • Wire Tap – a T-component, to log messages out
  • Message history – each component passed appends its information to the message
  • Message Store – all messages are stored in some database
  • Test message
  • Channel purger

Sales notes for soft skills

  • Be enthusiastic, act enthusiastic
  • Make records and statistics
  • Train public speaking
  • Plan your activity ahead
  • Find out what someone wants and give it to them
  • Help someone recognize what they want, and then give it to them
  • There are always two reasons to do something – exposed one and real one
  • Ask “why” question deeper and deeper
  • Tell the truth
  • Keep learning
  • Be respectful regarding your competitors
  • Look good
  • Be a good listener
  • Express your thoughts short
  • Make appointments
  • Do not be afraid to fail

DDD extract

Model and scenarios come first, before coding; coding is just the implementation of the model. Focus on the business, not on the database or technologies

Model based language

Model description – UML, text, anything

Layered architecture – UI, application, domain, infrastructure

Bounded context – a common ubiquitous language (shared with domain experts); all terms are unique within the scope of a bounded context. If the same term has two meanings, that definitely means they belong to two different bounded contexts

Ubiquitous language (best implemented by BDD – given/when/then)

There will be a design anyway: either a good one (common for everyone, verified by domain experts) or a bad one (a mix of multiple designs from multiple developers)

Context mapping – a map of interconnected bounded contexts.

Kinds:

  • partnership
  • shared kernel (try to avoid this)
  • customer-supplier
  • anticorruption layer
  • open host service

Entities and value objects – JPA

Modules

Aggregate – a facade: a root object that hides and manipulates an underlying set/tree of child objects (entities)

Factory – creates new objects

Repository – stores (persists) already created objects

Services – stateless functions

How to improve the model:

  1. Listen to domain experts and their language
  2. Build common language

Event sourcing (storing domain events that represent changes to the model, then computing the current state when needed by replaying these events)
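
A minimal event-sourcing sketch in Scala: state is a fold over the event log (the event types are illustrative):

```scala
// Event sourcing sketch: current state = fold over stored domain events
sealed trait AccountEvent
case class Deposited(amount: BigDecimal) extends AccountEvent
case class Withdrawn(amount: BigDecimal) extends AccountEvent

def balance(events: List[AccountEvent]): BigDecimal =
  events.foldLeft(BigDecimal(0)) {
    case (acc, Deposited(a)) => acc + a
    case (acc, Withdrawn(a)) => acc - a
  }

val history = List(Deposited(BigDecimal(100)), Withdrawn(BigDecimal(30)), Deposited(BigDecimal(5)))
println(balance(history))   // 75
```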

CQRS – Command Query Responsibility Segregation: separate paths/models for commands (writes) and queries (reads)