Let us delve into the various skill levels, or knowledge levels, designated as K-Levels in the testing industry.
What are K-Levels of knowledge?
K-Levels, or "Knowledge Levels," refer to the prescribed level of skill or knowledge essential for a particular certification.
The hierarchy of K-Levels is described in the globally recognized Bloom's Taxonomy of learning. Reaching a particular K-Level means that the individual has successfully achieved measurable and meaningful objectives.
All About Performance Testing – The Best Acceptance Criteria
First of all, let us see what the term "Performance Testing" means:
In general engineering practice, "Performance Testing" refers to the evaluation and measurement of the functional characteristics of an individual, a system, a product, or a material.
In software industry parlance, however, "Performance Testing" widely refers to the evaluation and measurement of the functional effectiveness of a software system or component with regard to its reliability, scalability, efficiency, interoperability, and stability under load.
These days a new discipline by the name "Performance Engineering" is emerging in the IT industry, with performance testing and acceptance testing viewed as its subsets. Performance engineering lays prime emphasis on covering performance aspects in the system design itself, i.e. right from the beginning and, more importantly, well before actual coding starts.
Why the software industry lays so much emphasis on Performance Testing:
The key reasons are:
1) Performance has become the key indicator of product quality and a major acceptance consideration in today's highly dynamic and competitive market.
2) Customers are becoming extremely demanding on the quality front and have a clear vision of their performance objectives.
3) Every customer is looking for greater speed, scalability, reliability, efficiency, and endurance across all applications, whether multi-tier, web-based, or client-server.
4) There is a greater need to identify and eliminate performance-inhibiting factors early in the development cycle. It is best to initiate performance testing efforts right at the beginning of the development project and to keep them active until final deployment.
What are the objectives of Performance Testing?
1) To carry out root cause analysis of common and uncommon performance problems and devise plans to tackle them.
2) To reduce the response time of the application with minimal investment in hardware.
3) To identify the problems causing the system to malfunction and fix them well before the production run; problems remedied in later stages of production carry a high cost.
4) To benchmark applications, with a view to refining the company's strategy for future software acquisition.
5) To ensure that the new system conforms to the specified performance criteria.
6) To compare the performance of two or more systems.
Typical Structure of a Performance Testing Model:
Step 1: Collection of requirements – the most important step and the backbone of the performance test model.
Step 2: System study.
Step 3: Design of testing strategies, which can include:
a) Preparation of traversal documents.
b) Scripting work.
c) Setting up the test environment.
d) Deployment of monitors.
Step 4: Test runs, which can cover:
a) Baseline test run
b) Enhancement test run
c) Diagnostic test run
Step 5: Analysis and preparation of an interim report.
Step 6: Implementation of recommendations from Step 5.
Step 7: Preparation of a finalized report.
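The seven steps above can be sketched as a minimal test driver. This is an illustrative skeleton only: the requirement values and the measured timings are invented placeholders, not output from a real test harness.

```python
# Minimal sketch of the seven-step performance test model.
# All figures are made-up placeholders for illustration.

def collect_requirements():
    # Step 1: gather the performance criteria the system must meet.
    return {"max_response_ms": 500}

def run_test(kind, requirements):
    # Step 4: execute one run and record whether the (here: hard-coded)
    # measured response time met the criterion.
    measured_ms = {"baseline": 420, "enhancement": 380, "diagnostic": 610}[kind]
    return {"run": kind,
            "measured_ms": measured_ms,
            "passed": measured_ms <= requirements["max_response_ms"]}

def performance_test_model():
    requirements = collect_requirements()                 # Step 1
    runs = ["baseline", "enhancement", "diagnostic"]      # Step 4
    results = [run_test(kind, requirements) for kind in runs]
    # Steps 5-7: the report summarises pass/fail per run.
    return {r["run"]: r["passed"] for r in results}

report = performance_test_model()
```

With the placeholder numbers above, the diagnostic run (610 ms) exceeds the 500 ms criterion while the other two runs pass.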
Attributes of a Good Performance Testing setup:
1) Availability of a performance baseline document detailing the present performance of the system. It acts as an effective baseline for regression testing and can conveniently be used to compare against expectations when system conditions change.
2) Performance test beds and the test environment should be separate and must replicate the live production environment as far as possible.
3) The performance testing environment should not be coupled with the development environment.
4) Resources leading to fulfillment of objectives, such as:
# Deployment of personnel with sound knowledge
# Systematic and deliberate planning
# Study of existing infrastructure
# Proper preparation
# Systematic execution
# Scientific analysis
# Effective reporting
These days, however, many companies have started doing part of their testing in the live environment. This helps them establish the points of difference experienced between test and live systems.
How to gear up for Performance Testing?
1) Define the performance conditions: First of all, we need to define performance conditions related to functional requirements such as speed, accuracy, and consumption of resources. Resources include memory requirements, storage space requirements, and the bandwidth of the communication system.
2) Study the operational profile: The operational profile contains details of the usage patterns and environment of the live system. It includes a description of the period of operation, the operating environment, the quantum of load, and the expected transactions. When exact data is not available, data from testing profiles can be approximated, especially when testing is not being done in the live environment.
3) Prepare good performance test cases: While designing performance test cases, our endeavor must be to:
a) Understand the present performance levels and use this information for benchmarking at a later date.
b) Evaluate the performance requirements of the system against the specified norms.
c) Clearly specify the system inputs and the expected outputs when the system is subjected to the defined load conditions, such as the test profile, test environment, and test duration.
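An operational profile, as described in step 2 above, can be captured as a small data structure from which load-test parameters are derived. Every value below is invented for illustration; a real profile comes from production measurements.

```python
# A toy operational profile; all values are hypothetical.
operational_profile = {
    "period_of_operation": "08:00-20:00",
    "peak_concurrent_users": 250,
    "expected_transactions_per_hour": 12_000,
    "environment": "test bed mirroring production",
}

# Derive the average per-second arrival rate used to drive the load test.
arrival_rate_per_s = (
    operational_profile["expected_transactions_per_hour"] / 3600
)
```

Deriving the arrival rate from the profile, rather than hard-coding it in scripts, keeps the load test in step with changes to the documented usage pattern.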
Ways of doing Performance Testing:
Conventionally there are two methods of performance testing:
1) Manual performance testing
2) Automated performance testing
1) Manual Performance Testing: To develop adequate confidence, response times, being a good indicator of the performance of a transaction, must be measured several times during the test. Stopwatches monitored by several people are one of the oldest and most effective ways to measure test performance. Depending on the available infrastructure, other means can also be devised.
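Once stopwatch readings have been collected, simple summary statistics give the confidence the repeated measurements are meant to provide. A sketch, with invented readings in seconds:

```python
import statistics

# Hypothetical stopwatch readings (seconds) for the same transaction,
# measured several times as recommended above.
readings = [1.92, 2.05, 1.98, 2.40, 2.01, 1.95, 2.10, 1.99]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)
# A crude 95th percentile: the reading below which ~95% of samples fall.
p95 = sorted(readings)[int(0.95 * len(readings)) - 1]
```

Reporting a percentile alongside the mean matters: a single slow outlier (the 2.40 s reading here) barely moves the mean but is exactly what users notice.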
2) Automated Performance Testing: Many approaches can be practiced here. We can use automation software that simulates user actions and simultaneously records response times and various system parameters, such as storage disc access, memory usage, and queue lengths for various messages.
We can apply additional data load to the system through utility programs, message replication programs, batch files, and protocol analysis tools.
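A minimal sketch of the automated approach, assuming no particular tool: concurrent virtual users are simulated with threads and each response time is recorded. The `transaction` function is a stand-in; in practice it would issue a real request against the system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction(_):
    # Stand-in for one simulated user action (e.g. an HTTP request);
    # a short sleep imitates server-side work.
    start = time.perf_counter()
    time.sleep(0.05)
    return time.perf_counter() - start

# Simulate 20 concurrent virtual users, recording every response time.
with ThreadPoolExecutor(max_workers=20) as pool:
    response_times = list(pool.map(transaction, range(20)))

slowest = max(response_times)
average = sum(response_times) / len(response_times)
```

Dedicated load-testing tools add ramp-up schedules, think times, and system-parameter capture on top of this basic simulate-and-record loop.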
Important Considerations for Designing Good Performance Test Cases:
1) Stress: To test the ability of a system or component when pushed beyond the specified limits of its performance requirements.
2) Capacity: To cover the maximum amounts that can be contained or produced, or that completely occupy the entity.
3) Efficiency: To take care of the desired efficiency, measured as the ratio of the volume of data processed to the amount of resources consumed for that processing.
4) Response time: To take care of the specified response-time requirements, i.e. the total time elapsed between the initiation of a request and the receipt of the response.
5) Reliability: Must be able to deliver the expected results with sufficient consistency.
6) Bandwidth: Must be able to measure and evaluate the bandwidth requirements, i.e. the amount of data passing across the system.
7) Security: Must be able to evaluate user confidentiality, access permissions, and data integrity considerations in the system.
8) Recovery: Must be able to subject the system under test to higher loads and measure the time it takes to return to normal after the load is withdrawn.
9) Scalability: Must be able to handle more load through the addition of hardware components, without any code changes.
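Considerations 3 and 4 above reduce to simple arithmetic checks once the measurements are in hand. The figures below are invented for illustration:

```python
# Efficiency (consideration 3): volume of data processed per unit of
# resource consumed. Hypothetical run: 120 MB processed in 30 CPU-seconds.
data_processed_mb = 120
cpu_seconds = 30
efficiency_mb_per_cpu_s = data_processed_mb / cpu_seconds

# Response time (consideration 4): compare the measured elapsed time
# between request initiation and response receipt against the criterion.
measured_response_ms = 430
criterion_ms = 500
meets_response_criterion = measured_response_ms <= criterion_ms
```

Expressing each consideration as an explicit pass/fail check like this is what turns a performance requirement into a usable acceptance criterion.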
Lessons learnt:
A performance engineering approach encompassing load testing, stress testing, and endurance testing is an extremely important acceptance consideration in today's highly competitive market, with its highly demanding and quality-conscious customers.
Read many more exciting articles at:
http://www.softwaretestinggenius.com