
Arinda oktaviana 11353204810 vii lokal g


  1. Chapter 4: Security Part II: Auditing Database Systems BY: ARINDA OKTAVIANA (11353204810)
  2. Learning Objectives • Understand the operational problems inherent in the flat-file approach to data management that gave rise to the database approach. • Understand the relationships among the fundamental components of the database concept. • Recognize the defining characteristics of three database models: hierarchical, network, and relational. • Understand the operational features and associated risks of deploying centralized, partitioned, and replicated database models in the DDP environment. • Be familiar with the audit objectives and procedures used to test data management controls.
  3. Flat-File Approach Flat files are data files that contain records with no structured relationships to other files. The flat-file approach is most often associated with so-called legacy systems. The flat-file environment promotes a single-user view approach to data management whereby end users own their data files rather than share them with other users. Data redundancy is the replication of essentially the same data in multiple files. It contributes to three significant problems in the flat-file environment: • Data storage • Data updating • Currency of information Task-data dependency is the user’s inability to obtain additional information as his or her needs change.
  4. Flat-File Model
  5. This replication of essentially the same data in multiple files is called data redundancy and contributes to three significant problems in the flat-file environment: data storage, data updating, and currency of information. These and a fourth problem (not specifically caused by data redundancy) called task-data dependency are discussed next. • Data Storage Efficient data management captures and stores data only once and makes this single source available to all users who need it. In the flat-file environment, this is not possible. To meet the private data needs of diverse users, organizations must incur the costs of both multiple collection and multiple storage procedures. Some commonly used data may be duplicated dozens, hundreds, or even thousands of times within an organization. • Data Updating Organizations store a great deal of data on master files and reference files that require periodic updating to reflect changes. For example, a change to a customer’s name or address must be reflected in the appropriate master files. When users keep separate and exclusive files, each change must be made separately for each user. These redundant updating tasks add significantly to the cost of data management.
  6. • Task-Data Dependency Another problem with the flat-file approach is the user’s inability to obtain additional information as his or her needs change: this is known as task-data dependency. In other words, a user’s task is limited and decision making ability constrained by the data that he or she possesses and controls. Since users in a flat-file environment act independently, rather than as members of a user community, establishing a mechanism for formal data sharing is difficult or impossible. Therefore, users in this environment tend to satisfy new information needs by procuring new data files. This takes time, inhibits performance, adds to data redundancy, and drives data management costs even higher. An organization can overcome the problems associated with flat files by implementing the database approach. The key features of this data management model are discussed next.
  7. Database Approach • Access to the data resource is controlled by a database management system (DBMS). • Centralizes organization’s data into a common database shared by the user community. • All users have access to data they need, which may overcome flat-file problems. • Elimination of data storage problem: No data redundancy. • Elimination of data updating problem: Single update procedure eliminates currency of information problem. • Elimination of task-data dependency problem: Users only constrained by legitimacy of access needs.
  8. Database Model • Elimination of Data Update Problem Because each data element exists in only one place, it requires only a single update procedure. This reduces the time and cost of keeping the database current. • Elimination of Currency Problem A single change to a database attribute is automatically made available to all users of the attribute. For example, a customer address change entered by the billing clerk is immediately reflected in the marketing and product services views. • Elimination of Task-Data Dependency Problem The most striking difference between the database model and the flat-file model is the pooling of data into a common database that is shared by all organizational users. With access to the full domain of entity data, changes in user information needs can be satisfied without obtaining additional private data sets. Users are constrained only by the limitations of the data available to the entity and the legitimacy of their need to access them. Therefore, the database method eliminates the limited access that flat files, by their nature, dictate to users.
  9. Elements of the Database Concept
  10. DBMS Features and Data Definition Language • Program Development – Applications may be created by programmers and end users. • Backup and Recovery - Copies made during processing. • Database Usage Reporting - Captures statistics on database usage (who, when, etc.). • Database Access - Authorizes access to sections of the database. • Data definition language used to define the database to the DBMS on three levels (views).
  11. Database Views • Internal view/ Physical view: Physical arrangement of records in the database.  Describes structures of data records, linkage between files and physical arrangement and sequence of records in a file. Only one internal view. • Conceptual view/ Logical view (schema): Describes the entire database logically and abstractly rather than physically. Only one conceptual view. • External view/ User view (subschema): Portion of database each user views. May be many distinct users. Data Manipulation Language (DML) • DML is the proprietary programming language that a particular DBMS uses to retrieve, process, and store data to/from the database. • Entire user programs may be written in the DML, or selected DML commands can be inserted into universal programs, such as COBOL and FORTRAN. • Can be used to ‘patch’ third party applications to the DBMS
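The schema/subschema distinction on this slide can be sketched concretely with SQLite (table and column names here are illustrative, not from the chapter): the table definition plays the role of the conceptual view, and a SQL view restricts what one user group sees.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Conceptual view (schema): the single logical definition of the data.
cur.execute("""CREATE TABLE employee (
    emp_id INTEGER PRIMARY KEY,
    name   TEXT,
    dept   TEXT,
    salary REAL)""")

# External view (subschema): the slice one hypothetical user group may see.
# A payroll clerk sees salaries, but only for one department.
cur.execute("""CREATE VIEW payroll_view AS
    SELECT emp_id, name, salary
    FROM employee
    WHERE dept = 'PAYROLL'""")

cur.execute("INSERT INTO employee VALUES (1, 'A. Smith', 'PAYROLL', 52000)")
cur.execute("INSERT INTO employee VALUES (2, 'B. Jones', 'SALES', 61000)")

rows = cur.execute("SELECT name FROM payroll_view").fetchall()
print(rows)  # only the payroll employee is visible through this subschema
```

Queries issued through `payroll_view` never touch rows outside the clerk's data domain, which is the access-limiting idea behind the user view.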
  12. Overview of DBMS Operation
  13. Informal Access: Query Language • Query is an ad hoc access methodology for extracting information from a database. – Users can access data via direct query which requires no formal application programs. • IBM’s Structured Query Language (SQL) has emerged as the standard query language. • Query feature enhances ability to deal with problems that pop up but poses an important control issue. – Must ensure it is not used for unauthorized database access.
  14. Functions of the Database Administrator (DBA)
  15. Organizational Interaction of the DBA
  16. The Physical Database • Lowest level and only one in physical form. • Magnetic spots on metallic coated disks that create a logical collection of files and records. • Data structures are the bricks and mortar of the database. – Allow records to be located, stored, and retrieved. – Two components: organization and access methods. • The organization of a file refers to the way records are physically arranged on the storage device - either sequential or random. • Access methods are programs used to locate records and to navigate through the database.
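The sequential-versus-random distinction above can be sketched in a few lines of Python (the account records are invented for illustration): sequential organization forces a record-by-record scan, while random (direct) organization uses an index that maps a key straight to a storage position.

```python
# Sequential organization: records are physically stored in key order,
# so locating one record means reading through the file.
records = [(100, "cash"), (110, "accounts receivable"), (120, "inventory")]

def sequential_lookup(key):
    for k, description in records:   # scan record by record
        if k == key:
            return description
    return None

# Random (direct) organization: an index maps a key to a storage
# position, so a single probe retrieves the record without a scan.
index = {k: pos for pos, (k, _) in enumerate(records)}

def direct_lookup(key):
    pos = index.get(key)
    return records[pos][1] if pos is not None else None

print(sequential_lookup(110))  # accounts receivable
print(direct_lookup(120))      # inventory
```

Real DBMS access methods (hashing, B-trees, pointer chains) are far more elaborate, but the trade-off is the same: direct access costs extra index storage in exchange for avoiding the scan.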
  17. Database Terminology • Entity: Anything the organization wants to capture data about. • Record Type: Physical database representation of an entity. • Occurrence: The number of records represented by a particular record type. • Attributes: Define entities with values that vary (i.e., each employee has a different name). • Database: Set of record types that an organization needs to support its business processes.
  19. Associations • Record types that constitute a database exist in relation to other record types. Three basic record associations: • One-to-one: For every occurrence of Record Type X there is one (or zero) of Record Type Y. • One-to-many: For every occurrence of Record Type X, there are zero, one or many occurrences of Record Type Y. • Many-to-many: For every occurrence of Record Types X and Y, there are zero, one or many occurrences of Record Types Y and X, respectively.
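In SQL terms, a one-to-many association is a foreign key, and a many-to-many association needs a junction (link) table. A minimal sketch with SQLite, using invented customer/order/item names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (cust_id INTEGER PRIMARY KEY, name TEXT);

-- One-to-many: every sales order points at exactly one customer,
-- but a customer may have zero, one, or many orders.
CREATE TABLE sales_order (
    order_id INTEGER PRIMARY KEY,
    cust_id  INTEGER REFERENCES customer(cust_id));

CREATE TABLE item (item_id INTEGER PRIMARY KEY, descr TEXT);

-- Many-to-many: a junction (link) table pairs orders with items.
CREATE TABLE order_line (
    order_id INTEGER REFERENCES sales_order(order_id),
    item_id  INTEGER REFERENCES item(item_id));

INSERT INTO customer VALUES (1, 'Acme');
INSERT INTO sales_order VALUES (10, 1), (11, 1);
INSERT INTO item VALUES (7, 'widget');
INSERT INTO order_line VALUES (10, 7), (11, 7);
""")

n = conn.execute(
    "SELECT COUNT(*) FROM sales_order WHERE cust_id = 1").fetchone()[0]
print(n)  # one customer, many orders
```

A one-to-one association would simply add a UNIQUE constraint on the foreign key column.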
  20. Record Associations
  21. The Hierarchical Model • Basis of the earliest DBMSs and still in use today. • Sets that describe the relationship between two linked files. • Each set contains a parent and a child. • Files at the same level with the same parent are siblings. • Tree structure with the highest level in the tree being the root segment and the lowest file in a branch the leaf. • Also called a navigational database. • Usefulness of model is limited because no child record can have more than one parent, which leads to data redundancy.
  22. Hierarchical Data Model
  23. The Network Model
  24. The Relational Model • Difference between this and navigational models is the way data associations are represented to the user. • Relational model portrays data in two-dimensional tables with attributes across the top forming columns. • Rows, formed where the columns intersect, are tuples: normalized arrays of data similar to records in a flat-file system. • Relations are formed by an attribute common to both tables in the relation.
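The last bullet, forming a relation through a common attribute, is exactly what a SQL join does. A small sketch with SQLite (the customer/invoice names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (cust_num INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE invoice  (inv_num  INTEGER PRIMARY KEY,
                       cust_num INTEGER,        -- common attribute
                       amount   REAL);
INSERT INTO customer VALUES (1875, 'J. Smith');
INSERT INTO invoice  VALUES (5001, 1875, 800.0);
""")

# The two tables are linked through cust_num, the attribute they share.
row = conn.execute("""
    SELECT c.name, i.amount
    FROM customer c
    JOIN invoice  i ON c.cust_num = i.cust_num
""").fetchone()
print(row)
```

Unlike the navigational models, no pointer structure ties the tables together; the association exists purely through matching attribute values at query time.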
  25. Data Integration in the Relational Model
  26. Centralized Databases in a Distributed Environment • Data retained in a central location. • Remote IT units send requests to the central site, which processes requests and transmits data back to the requesting IT units. • Actual processing is performed at the remote IT unit. • Objective of the database approach is to maintain data currency, which can be challenging. • During processing, account balances pass through a state of temporary inconsistency where values are incorrect. • Database lockout procedures prevent multiple simultaneous access to data, preventing potential corruption.
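The lockout idea can be sketched with a mutex in Python (the balance and amounts are invented): each update reads the balance and writes it back inside a lock, so no other transaction can see, or overwrite, the temporarily inconsistent value in between.

```python
import threading

balance = 1000
lock = threading.Lock()

def post_transaction(amount):
    global balance
    # The lock prevents two transactions from reading the same stale
    # balance; without it, one of the two updates could be lost.
    with lock:
        current = balance            # read
        balance = current + amount   # write back

threads = [threading.Thread(target=post_transaction, args=(10,))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 2000: all 100 updates applied exactly once
```

A real DBMS locks at finer granularity (a record or page rather than the whole database), but the principle is the same: a transaction holding the lock briefly excludes everyone else from the inconsistent intermediate state.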
  27. Distributed Databases: Partitioned Databases • Splits central database into segments distributed to their primary users. • Advantages: • Users’ control increased by having data stored at local sites. • Improved transaction processing response time. • Volume of transmitted data between IT units is reduced. • Reduces potential data loss from a disaster. • Works best for organizations that require minimal data sharing among units.
  28. The Deadlock Phenomenon • Occurs when multiple sites lock each other out of the database, preventing each from processing its transactions. • Transactions remain in a “wait” state until locks are removed. • Can result in transactions being incompletely processed and the database being corrupted. • Deadlock is a permanent condition that must be resolved with special software that analyzes and resolves conflicts. • Usually involves terminating one or more transactions so the others in deadlock can complete processing. • Preempted transactions must be reinitiated.
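One standard way such resolution software detects a deadlock, sketched here as a hypothetical simplification, is a wait-for graph: an edge from T1 to T2 means transaction T1 is waiting for a lock that T2 holds, and a cycle in the graph means deadlock.

```python
# Deadlock detection via a wait-for graph. wait_for maps each
# transaction to the list of transactions it is waiting on.
def find_deadlock(wait_for):
    visited, stack = set(), set()

    def visit(t):
        if t in stack:
            return True          # t reached again on this path: a cycle
        if t in visited:
            return False
        visited.add(t)
        stack.add(t)
        for u in wait_for.get(t, ()):
            if visit(u):
                return True
        stack.discard(t)
        return False

    return any(visit(t) for t in wait_for)

# T1 waits on T2 and T2 waits on T1: deadlocked.
print(find_deadlock({"T1": ["T2"], "T2": ["T1"]}))  # True
# T1 waits on T2, but T2 can finish: no deadlock.
print(find_deadlock({"T1": ["T2"], "T2": []}))      # False
```

Once a cycle is found, the resolver picks a victim transaction in the cycle, terminates it to release its locks, and reinitiates it later, matching the bullets above.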
  29. The Deadlock Condition
  30. Distributed Databases: Replicated Databases • Effective for situations with a high degree of data sharing, but no primary user. • Common data replicated at each site, reducing data traffic between sites. • Primary justification is to support read-only queries. • Problem is maintaining current versions of the database at each site. • Since each IT unit processes its own transactions, the common data replicated at each site are affected by different transactions and may reflect different values.
  31. Concurrency Control • Database concurrency is the presence of complete and accurate data at all user sites. • Designers need to employ methods to ensure transactions processed at each site are accurately reflected in the databases of all the other sites. • A commonly used method is to serialize transactions, which involves labeling each transaction by two criteria: • Special software groups transactions into classes to identify potential conflicts. • Each transaction is time-stamped.
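The time-stamping half of that scheme can be illustrated with a toy sketch (the transactions are invented): each transaction is stamped on arrival, and every site applies conflicting transactions in timestamp order, so all sites converge on the same final state even if messages arrive out of order.

```python
import itertools

# Each transaction receives a monotonically increasing timestamp
# when it enters the system.
ts_counter = itertools.count(1)

def stamp(txn):
    return (next(ts_counter), txn)

t1 = stamp("credit 500")
t2 = stamp("debit 200")
t3 = stamp("credit 50")

# Suppose the transactions reach a remote site out of order:
arrived = [t3, t1, t2]

# Serializing by timestamp makes every site apply them identically.
replayed = [txn for _, txn in sorted(arrived)]
print(replayed)
```

Real distributed DBMSs must also make the timestamps globally unique across sites (e.g. by appending a site identifier), a detail omitted from this single-site sketch.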
  32. Database Distribution Methods and the Accountant • Many issues and trade-offs in distributing databases. • Basic questions to be addressed: • Centralized or distributed data? • If distributed, replicated or partitioned? • If replicated, total or partial replication? • If partitioned, what is the allocation of the data segments among the sites? • Choices impact organization’s ability to maintain database integrity, preserve audit trails, and have accurate records.
  33. Controlling and Auditing Data Management Systems • Controls over data management systems fall into two categories. • Access controls are designed to prevent unauthorized individuals from viewing, retrieving, corrupting or destroying data. • Backup controls ensure that the organization can recover its database in the event of data loss.
  34. Access Controls • User view (subschema) is a subset of the database that defines a user’s data domain and access. • Database authorization table contains rules that limit user actions. • User-defined procedures allow users to create a personal security program or routine. • Data encryption procedures protect sensitive data. • Biometric devices such as fingerprints or retina prints control access to the database. • Inference controls should prevent users from inferring, through query options, specific data values they are unauthorized to access.
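A database authorization table can be sketched as a simple lookup from (user, object) to the set of granted actions; the users, object, and privileges below are entirely hypothetical.

```python
# Hypothetical authorization table: each entry maps a (user, object)
# pair to the set of actions that user is granted on that object.
AUTH_TABLE = {
    ("jones", "accounts_receivable"): {"read", "insert", "modify"},
    ("smith", "accounts_receivable"): {"read"},
}

def authorized(user, obj, action):
    """Apply the rule: an action is allowed only if explicitly granted."""
    return action in AUTH_TABLE.get((user, obj), set())

print(authorized("smith", "accounts_receivable", "read"))    # True
print(authorized("smith", "accounts_receivable", "delete"))  # False
print(authorized("doe",   "accounts_receivable", "read"))    # False
```

Note the default-deny design: a user or action absent from the table gets no access, which is the posture an auditor checks for when reviewing the authority table.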
  35. Subschema Restricting Access
  36. Audit Procedures for Testing Database Access Controls • Verify DBA personnel retain responsibility for authority tables and designing user views. • Select a sample of users and verify access privileges are consistent with job description. • Evaluate cost and benefits of biometric controls. • Verify database query controls to prevent unauthorized access via inference. • Verify sensitive data are properly encrypted.
  37. Backup Controls in the Database Environment • Since data sharing is a fundamental objective of the database approach, environment is vulnerable to damage from individual users. • Four needed backup and recovery features: • Backup feature makes a periodic backup of entire database which is stored in a secure, remote location. • Transaction log provides an audit trail of all processed transactions. • Checkpoint facility suspends all processing while system reconciles transaction log and database change log against the database. • Recovery module uses logs and backup files to restart the system after a failure.
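How the recovery module combines the last backup with the transaction log can be sketched as a roll-forward (the account and amounts are invented for illustration): restore the backup, then replay every logged transaction recorded after it.

```python
# Recovery sketch: restore the periodic backup, then roll forward
# every transaction recorded in the log since that backup was taken.
backup  = {"cash": 1000}                     # last full backup
txn_log = [("cash", +250), ("cash", -100)]   # transactions since backup

def recover(backup, txn_log):
    db = dict(backup)                 # 1. restore the backup copy
    for account, amount in txn_log:   # 2. replay the transaction log
        db[account] += amount
    return db

print(recover(backup, txn_log))  # {'cash': 1150}
```

The checkpoint facility's job is to bound how much of the log must be replayed: transactions reconciled at the last checkpoint need not be reprocessed.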
  38. Backup of Direct Access Files
  39. Audit Procedures for Testing Database Backup Controls • Verify backups are performed routinely and frequently. • Backup policy should balance the inconvenience of frequent activity against the business disruption caused by system failure. • Verify that automatic backup procedures are in place and functioning and that copies of the database are stored off-site.
  40. COBIT 4.1 Excerpt EXECUTIVE OVERVIEW For many enterprises, information and the technology that supports it represent their most valuable, but often least understood, assets. Successful enterprises recognise the benefits of information technology and use it to drive their stakeholders’ value. These enterprises also understand and manage the associated risks, such as increasing regulatory compliance and critical dependence of many business processes on information technology (IT).
  41. COBIT’s General Acceptability COBIT is based on the analysis and harmonisation of existing IT standards and good practices and conforms to generally accepted governance principles. It is positioned at a high level, driven by business requirements, covers the full range of IT activities, and concentrates on what should be achieved rather than how to achieve effective governance, management and control. Therefore, it acts as an integrator of IT governance practices and appeals to executive management; business and IT management; governance, assurance and security professionals; and IT audit and control professionals. It is designed to be complementary to, and used together with, other standards and good practices. Implementation of good practices should be consistent with the enterprise’s governance and control framework, appropriate for the organisation, and integrated with other methods and practices that are being used. Standards and good practices are not a panacea.
  42. Their effectiveness depends on how they have been implemented and kept up to date. They are most useful when applied as a set of principles and as a starting point for tailoring specific procedures. To avoid practices becoming shelfware, management and staff should understand what to do, how to do it and why it is important. To achieve alignment of good practice to business requirements, it is recommended that COBIT be used at the highest level, providing an overall control framework based on an IT process model that should generically suit every enterprise. Specific practices and standards covering discrete areas can be mapped up to the COBIT framework, thus providing a hierarchy of guidance materials. COBIT appeals to different users: • Executive management—To obtain value from IT investments and balance risk and control investment in an often unpredictable IT environment
  43. • Business management—To obtain assurance on the management and control of IT services provided by internal or third parties • IT management—To provide the IT services that the business requires to support the business strategy in a controlled and managed way • Auditors—To substantiate their opinions and/or provide advice to management on internal controls COBIT has been developed and is maintained by an independent, not-for-profit research institute, drawing on the expertise of its affiliated association’s members, industry experts, and control and security professionals. Its content is based on ongoing research into IT good practice and is continuously maintained, providing an objective and practical resource for all types of users.
  44. THE END, AND THANK YOU. Wassalamualaikum Wr. Wb.
