Decision making is a process by
 which "best solutions" are
 found.

Managers are the ones who make
 the decisions that lead to the best
 possible outcome under the given
 circumstances.
It is an act, process, or
 methodology of making
 something as fully perfect,
 functional, or effective as
 possible.
Case of a MANAGER
- A manager finds the level of
output that maximizes the profit
of the firm, or determines
how much labor, capital, and raw
material inputs to use to produce
a given amount of output at the
lowest possible cost.
Case of Consumers

- As consumers, they search, within
the constraints imposed by prices
and their income, for the
combination of goods and services that
will yield the highest level of
satisfaction.
The objective function is the function the decision maker
  seeks to maximize or minimize.
Examples:
1. A manager will always try to
    maximize profit.
2. A consumer will always maximize
    the satisfaction obtained from the goods consumed.
-   An optimization problem involves
    maximizing or minimizing the objective
    function.
- When the objective function
measures a benefit, the decision
maker seeks to maximize this benefit,
thus solving a maximization problem.

- When the objective function
measures a cost, the decision maker
seeks to minimize that cost, thus
solving a minimization problem.
Activities or choice variables determine the value of the objective function.

Example:
- Profit
The value of profit is determined by the
number of units sold or produced, while the
production of units of the good is the activity, or
choice variable, that determines the value of the
objective function, which is profit.
 Objective function - measures whatever it is
 that the particular decision maker wishes to
 either maximize or minimize.

 E.g. profit, cost, satisfaction…

 Maximization problem - an optimizing problem
 that involves maximizing the objective
 function.
 Discrete choice variable - a choice variable
 that can take on only specific integer
 values.

 Continuous choice variable - a choice variable
 that can take on any value between two end
 points.

 Minimization problem - an optimizing problem
 that involves minimizing the objective
 function.

 Activities, or choice variables - determine the
 value of the objective function.

 The objective function may be a function of more
 than one activity.
 Unconstrained optimization - an optimization
 problem in which the decision maker can
 choose any level of activity from an unrestricted
 set of values.

 E.g. there are no external restrictions on choosing any
 level of output in order to maximize net
 benefit.
 Constrained optimization - optimization
 problems in which the decision maker can
 choose values for the choice variables only from a
 restricted set of values.

 Constrained maximization - a maximization
 problem in which the activities must be chosen to
 satisfy a side constraint that the total cost of the
 activities be held to a specific amount.

 Total benefit function = objective function
 Total cost = constraint
 Constrained minimization - a minimization
 problem in which the activities must be chosen
 to satisfy a side constraint that the total
 benefit of the activities be held to a specific
 amount.

 Objective function = total cost function
 Total benefit function = constraint


 E.g. a gift shop
 Marginal analysis - an analytical tool for solving
 optimization problems that involves changing
 the value of a choice variable by a small
 amount to see whether the objective function can
 be further increased or further decreased.
 Unconstrained Maximization
 NB = TB - TC
 NB = net benefit
 TB = total benefit
 TC = total cost


 The activity is increased or decreased in
  order to obtain the highest level of net benefit.
 The optimal level of activity is obtained
  when no further increase in net benefit is
  possible from any change in the activity.
 Marginal benefit - the addition to total benefit
 attributable to increasing the activity by a small
 amount.

 Marginal cost - the addition to total cost attributable
 to increasing the activity by a small amount.

MB = change in total benefit / change in activity

MC = change in total cost / change in activity

                     MB > MC     MB < MC
Increase activity    NB rises    NB falls
Decrease activity    NB falls    NB rises

The optimal level of the activity is attained (net benefit is
  maximized) when the level of activity is the last level for which
  marginal benefit exceeds marginal cost.
   Maximization with a Continuous Choice variable

    When a decision maker wishes to obtain the
    maximum net benefit from an activity that is
    continuously variable, the optimal level of the
    activity is that level at which the marginal
    benefit is equal to marginal cost (MB=MC)
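A minimal sketch of this MB = MC rule, using hypothetical benefit and cost functions that are not from the text and are chosen only so the optimum can be checked by hand (TB = 100A - A², TC = 20A + A², so MB = MC at A = 20):

```python
# Minimal sketch: find the net-benefit-maximizing level of a continuous activity.
# The benefit and cost functions are hypothetical; with them MB = 100 - 2A and
# MC = 20 + 2A, so the optimum is A = 20.
import numpy as np

A = np.linspace(0, 50, 50001)      # candidate activity levels
TB = 100 * A - A**2                # hypothetical total benefit
TC = 20 * A + A**2                 # hypothetical total cost
NB = TB - TC                       # net benefit

A_star = A[np.argmax(NB)]
print(f"optimal activity level = {A_star:.2f}")                    # 20.00
print(f"MB = {100 - 2*A_star:.2f}, MC = {20 + 2*A_star:.2f}")      # 60.00, 60.00
```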
Constrained Optimization

-   An objective function is maximized or minimized
    subject to a constraint when, for all of the activities
    in the objective function, the ratios of marginal
    benefit per dollar spent are equal across all
    activities:

-   MB_A/P_A = MB_B/P_B
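As a hypothetical illustration (the numbers are assumed for this example only, not taken from the text): if activity A yields a marginal benefit of 40 at a price of $4 per unit while activity B yields a marginal benefit of 30 at $6 per unit, then MB_A/P_A = 10 exceeds MB_B/P_B = 5, so spending should be shifted from B toward A until the two ratios are equal.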
 This chapter set forth the basic principles of
 regression analysis: estimation and
 assessment of statistical significance. We
 emphasized how to interpret the results of
 regression analysis rather than focusing on
 the mathematics of regression analysis.
 The coefficients in an equation determine the
  exact mathematical relation among the variables,
  with Y being the dependent variable and X the
  independent, or explanatory, variable.
 Estimation is the process of finding estimates of the
  numerical values of the parameters of an
  equation.
 The two-variable linear model, or simple
 regression analysis, is used for testing
 hypotheses about the relationship between the Y
 variable (the dependent variable) and the X
 variable (the independent, or explanatory, variable).
YEAR     n    Yi (corn)   Xi (fertilizer)    yi     xi    xiyi    xi²
1971     1       40              6          -17    -12     204    144
1972     2       44             10          -13     -8     104     64
1973     3       46             12          -11     -6      66     36
1974     4       48             14           -9     -4      36     16
1975     5       52             16           -5     -2      10      4
1976     6       58             18            1      0       0      0
1977     7       60             22            3      4      12     16
1978     8       68             24           11      6      66     36
1979     9       74             26           17      8     136     64
1980    10       80             32           23     14     322    196
Total:  10      570            180            0      0     956    576
Mean:            57             18

(yi = Yi - mean of Yi; xi = Xi - mean of Xi)
b1 (slope of the estimated regression line) = Σxiyi / Σxi² = 956 / 576 ≈ 1.66
b0 (Y intercept) = mean of Yi - (b1 × mean of Xi) = 57 - 1.66(18) ≈ 27.13
Ŷi (estimated regression equation) = 27.13 + 1.66Xi
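A short sketch (Python assumed as the tooling) that reproduces the slope and intercept from the deviation sums in the table above:

```python
# Verify b1 and b0 from the corn/fertilizer data (deviation sums: 956 and 576).
Y = [40, 44, 46, 48, 52, 58, 60, 68, 74, 80]   # corn output, Yi
X = [6, 10, 12, 14, 16, 18, 22, 24, 26, 32]    # fertilizer, Xi

y_bar, x_bar = sum(Y) / len(Y), sum(X) / len(X)             # 57, 18
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y))  # 956
sxx = sum((x - x_bar) ** 2 for x in X)                      # 576

b1 = sxy / sxx            # ≈ 1.66
b0 = y_bar - b1 * x_bar   # ≈ 27.13
print(f"b1 = {b1:.3f}, b0 = {b0:.3f}")   # prints b1 = 1.660, b0 = 27.125
```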
(Chart: corn output Yi and fertilizer use Xi plotted by year, 1971-1980.)
              Coefficients  Standard Error  t Stat       P-value      Lower 95%    Upper 95%    Lower 95.0%  Upper 95.0%
Intercept     27.13         1.979265348     13.70457984  7.74557E-07  22.56080593  31.68919407  22.56080593  31.68919407
X Variable 1  1.66          0.101321087     16.38081745  1.94353E-07  1.426075378  1.893369067  1.426075378  1.893369067
Regression Statistics

Multiple R                               0.985418303

R Square                                 0.971049232

Adjusted R Square                        0.967430386

Standard Error                           2.431706077

Observations                                     10
ANOVA

             df   SS            MS            F             Significance F
Regression   1    1586.694444   1586.694444   268.3311803   1.94353E-07
Residual     8    47.30555556   5.913194444
Total        9    1634
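A minimal sketch (assuming the statsmodels package is available) that reproduces the coefficient table, regression statistics, and ANOVA above from the corn/fertilizer data:

```python
# Regress corn output on fertilizer with OLS; the summary should show
# intercept ≈ 27.13, slope ≈ 1.66, R² ≈ 0.971, F ≈ 268.3, matching the output above.
import numpy as np
import statsmodels.api as sm

corn = np.array([40, 44, 46, 48, 52, 58, 60, 68, 74, 80], dtype=float)       # Yi
fertilizer = np.array([6, 10, 12, 14, 16, 18, 22, 24, 26, 32], dtype=float)  # Xi

model = sm.OLS(corn, sm.add_constant(fertilizer)).fit()
print(model.summary())
```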
   Tcomp is greater than Tcrit (16.38081745 > 1.860).
    Since that is the case, the X variable is significant
    at the given 5% level of significance in explaining
    the relationship between the X and Y parameters.

    • R², the explanatory power of the model, is equal
    to 0.9710, or 97.10%. That is, fertilizer (X) explains
    97.10% of the variation in corn output (Y). The R² is
    significantly different from zero.

    • For the F distribution, Fcomp indicates that the
    parameters are not all equal to zero. The high value
    of the F ratio implies a significant relationship between
    the dependent and independent variables.
 The test of significance of the parameter
 estimates is passed, as are the test for the
 coefficient of determination (R²) and the
 test of the overall significance of the
 regression.
Population regression line - the equation or line
  representing the true (actual) relation between the
  dependent variable and the explanatory variable.

Sample regression line - the line that best fits
  the data in the sample is called the sample
  regression line.
 An estimator that produces estimates of a
 parameter that are, on average, equal to the
 true value of the parameter is an unbiased estimator.
 The distribution (and relative frequency) of the
 values b can take, because the observations on Y
 and X come from a random sample.
 The estimated coefficient is far enough away
  from zero:
 either sufficiently greater than zero (a
  positive estimate) or sufficiently less than
  zero (a negative estimate).
 The t-stat is used to test the hypothesis that the
  true value of b equals zero.
 If the t-stat is greater than the critical value
  of t, then the hypothesis that b = 0 is rejected
  in favor of the alternative hypothesis that
  b ≠ 0.
 When the calculated t-stat exceeds the
  critical value of t, b is significantly different
  from zero, or equivalently, b is statistically
  significant.
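A small sketch of this test (scipy assumed available), using the slope estimate and standard error from the Excel output above and n - 2 = 8 degrees of freedom:

```python
# t-test on the slope: reject H0: b = 0 when the calculated t-stat exceeds the
# critical t value (5% level, 8 d.f., matching the Tcrit = 1.860 used earlier).
from scipy import stats

b_hat, se_b, df = 1.66, 0.1013, 8   # estimate, standard error, degrees of freedom
t_stat = b_hat / se_b               # ≈ 16.4
t_crit = stats.t.ppf(0.95, df)      # ≈ 1.860
print(t_stat > t_crit)              # True -> b is statistically significant
```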
       Using the P-value
           We use the P-value for analyzing data and to draw the strongest possible
            conclusion from the limited data that are given.
           To get the P-value, you need the estimated value, the significance level,
            and the test statistic.

    Decision criterion for a hypothesis test using the P-value:

    If the P-value is less than α, reject the null hypothesis; otherwise, fail to reject the null hypothesis.
    Example:
    Ha: µ ≠ 30 versus Ho: µ = 30

    Assumptions: X is normally distributed with σ = 8
            Test statistic: z
    α = .05   RR: z < -1.96 or z > 1.96 (P-value < .05)

    Calculation: z = 1.54

    P-value = 2P(z > |z calculated|) = 2P(z > |1.54|) = 2P(z < -1.54)
            = 2(.0618) = .1236

    Decision: Fail to reject Ho.
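A one-line check of this two-sided P-value (scipy assumed available):

```python
# Two-sided P-value for z = 1.54 under the standard normal distribution.
from scipy.stats import norm

z = 1.54
p_value = 2 * (1 - norm.cdf(abs(z)))   # 2 * P(Z > 1.54)
print(round(p_value, 4))               # 0.1236 -> fail to reject Ho at α = .05
```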
   Evaluation of the Regression Equation
      Regression equation: y = a + bx
        Slope: b = (NΣXY - (ΣX)(ΣY)) / (NΣX² - (ΣX)²)
        Intercept: a = (ΣY - b(ΣX)) / N
        where
                 x and y are the variables
                 b = the slope of the regression line
                 a = the intercept point of the regression line and the y axis
                 N = number of values or elements
                 X = first score
                 Y = second score
                 ΣXY = sum of the products of first and second scores
                 ΣX = sum of first scores
                 ΣY = sum of second scores
                 ΣX² = sum of squared first scores
        Regression example: find the simple/linear regression of
        X values: 60 61 62 63 65
        Y values: 3.1 3.6 3.8 4 4.1
        To find the regression equation, we first find the slope and intercept and use them to form
        the regression equation.
        Step 1: Count the number of values.
               N = 5
        Step 2: Find XY and X² for each pair.
Step 3: Find ΣX, ΣY, ΣXY, ΣX².
        ΣX = 311
        ΣY = 18.6
        ΣXY = 1159.7
        ΣX² = 19359
 Step 4: Substitute into the slope formula given above.
        Slope(b) = (NΣXY - (ΣX)(ΣY)) / (NΣX² - (ΣX)²)
        = ((5)(1159.7) - (311)(18.6)) / ((5)(19359) - (311)²)
        = (5798.5 - 5784.6) / (96795 - 96721)
        = 13.9 / 74
        = 0.19
 Step 5: Now, again substitute in the above intercept formula given.
        Intercept(a) = (ΣY - b(ΣX)) / N
        = (18.6 - 0.19(311))/5
        = (18.6 - 59.09)/5
        = -40.49/5
        = -8.098
 Step 6: Then substitute these values in regression equation formula
        Regression Equation(y) = a + bx
        = -8.098 + 0.19x.
Suppose we want to know the approximate y value for x = 64. We can
substitute the value into the above equation.
       Regression Equation(y) = a + bx
       = -8.098 + 0.19(64).
       = -8.098 + 12.16
       = 4.06
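A short sketch (Python assumed) verifying the hand computation; note that the hand calculation rounds b to 0.19 before computing the intercept, so the unrounded intercept differs slightly:

```python
# Verifies the worked example: slope ≈ 0.19, intercept ≈ -8.1, prediction ≈ 4.06.
X = [60, 61, 62, 63, 65]
Y = [3.1, 3.6, 3.8, 4.0, 4.1]
N = len(X)

sum_x, sum_y = sum(X), sum(Y)                 # 311, 18.6
sum_xy = sum(x * y for x, y in zip(X, Y))     # 1159.7
sum_x2 = sum(x * x for x in X)                # 19359

b = (N * sum_xy - sum_x * sum_y) / (N * sum_x2 - sum_x ** 2)  # ≈ 0.188 (0.19 rounded)
a = (sum_y - b * sum_x) / N        # ≈ -7.96 (-8.098 above, because b was rounded first)
print(f"y ≈ {a:.3f} + {b:.3f}x; prediction at x = 64: {a + b * 64:.2f}")  # ≈ 4.06
```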
   Coefficient of determination
      It is used for statistical models whose main purpose is to predict future outcomes
       on the basis of other related information.
      It measures the percentage of the variation in Y that can be explained by the X's through
       the model Y = Xβ + ε.
      It is the proportionate reduction of the total variation in Y associated with the use of the set of
       independent variables X1, X2, …, Xk (assuming a constant term is included in the
       model).
      It is a goodness-of-fit measure.
        Consider the Tampa sales example. From the printout, R² = 0.9453.
    • Interpretation: about 94% of the variability observed in sale prices can be
    explained by the assessed values of the homes. Thus, the assessed value of a
    home contributes a lot of information about its sale price.
    • We can also find the pieces needed to compute R² by hand in either
    JMP or SAS output:
    – SSyy is called Sum of Squares of Model in SAS and JMP;
    SSE is called Sum of Squares of Error in both SAS and JMP.
    • In the Tampa sales example, SSyy = 1673142 and SSE = 96746, and thus
      R² = (1673142 - 96746) / 1673142 ≈ 0.94.
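The same R² arithmetic, as a tiny sketch:

```python
# R² from the sums of squares quoted for the Tampa sales example above.
SS_yy = 1673142    # total variation in Y
SSE = 96746        # unexplained (error) sum of squares
R2 = (SS_yy - SSE) / SS_yy
print(round(R2, 2))   # 0.94
```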
   F-test
      The F-test is a simultaneous test of whether all of the betas equal zero (meaning
       that none of your X's is useful) against the alternative that at least one is not equal
       to zero (meaning that the corresponding variable affects Y).

        Ex.: the hypothesis that the means of several normally distributed
         populations, all having the same standard deviation, are equal. This is
         perhaps the best-known F-test, and it plays an important role in the
         analysis of variance (ANOVA).

        The hypothesis that a data set in a regression analysis follows the
         simpler of two proposed linear models that are nested within each other.
   Multiple Regression
      Its purpose is to learn more about the relationship between
       several independent variables and a dependent variable.

        Ex. A car agent has a listing of cars together with characteristics
         of each car: transportation, comfort, style, luxury, fuel economy,
         etc. Once this information has been compiled for the various cars,
         it is interesting to see whether and how these measures relate to
         the price for which a car is sold. For example, the interior space
         of a car may be a better predictor of the price at which it sells
         than how luxurious the car is.
Quadratic Regression Models
Log-Linear Regression Models
These are used when the underlying relation between
   X and Y plots as a curve, rather than a
   straight line.
 An  analyst would use nonlinear regression
  model when the scatter diagram shows a
  curvilinear pattern.
 Nonlinear regression is a general technique
  to fit a curve through your data.
 The purpose of linear regression is to find the
  line that comes closest to your data.
 Quadratic Regression Model
 Log-Linear Regression Model

 The quadratic model is one of the most useful nonlinear forms for
   managerial economics.
 It is expressed as Y = a + bX + cX².
 Nonlinear model to linear model:
 Create a new variable "Z", defined as Z = X², so that
 Y = a + bX + cZ
 Run a regression of Y on X and Z:
Data (Y, X, and Z = X²):

  Y     X     Z
  82    3     9
 107    3     9
  61    4    16
  77    5    25
  68    6    36
  30    8    64
  57   10   100
  40   12   144
  82   14   196
  68   15   225
 102   17   289
 110   18   324

Regression output:

Dependent Variable: Y     Observations: 12     F-Ratio: 13.11     R-squared: 0.75

Variable    Parameter Estimate   Standard Error   T-Ratio
Intercept   140                  17.14             8.17
X           -20                   4.14            -4.83
Z             1.01                 0.5              2
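A sketch of this quadratic fit (assuming numpy and statsmodels are available); the estimated coefficients should come out roughly as reported above:

```python
# Create Z = X² and regress Y on X and Z; parameter estimates should be close
# to the reported values (≈ 140.08, -19.51, 1.01).
import numpy as np
import statsmodels.api as sm

Y = np.array([82, 107, 61, 77, 68, 30, 57, 40, 82, 68, 102, 110], dtype=float)
X = np.array([3, 3, 4, 5, 6, 8, 10, 12, 14, 15, 17, 18], dtype=float)
Z = X ** 2                                          # the new variable Z = X²

design = sm.add_constant(np.column_stack([X, Z]))   # columns: constant, X, Z
fit = sm.OLS(Y, design).fit()
print(fit.params)   # [intercept, coefficient on X, coefficient on Z]
```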
 The estimated quadratic regression equation is
  Y = 140.08 - 19.51X + 1.01X²
 1.01 is also the slope parameter estimate for
   X².
 The estimated equation can be used to
   estimate the value of Y for any particular
   value of X.
    Example: if X = 10,
    Y will be equal to 45.98:
         Y = 140.08 - 19.51(10) + 1.01(10)²

    After this, perform t-tests to determine
     the statistical significance of each
     parameter.
 Y is related to one or more explanatory variables
   in a multiplicative form
 Y = aX^b Z^c
 Transform to a linear equation


    •   b = percentage change in Y / percentage change in X

    •   c = percentage change in Y / percentage change in Z
 Parameters b and c are elasticities.
 To transform the equation into a
   linear form, we take the natural
   logarithms of both sides of the
   equation:

   lnY = (ln a) + b(ln X) + c(ln Z)
 If we define Y' = lnY, X' = lnX, Z' = lnZ, and a' = ln a, this becomes: Y' = a' + bX' + cZ'
    Example
                 Variable:   Y = aX^b
    Since Y is positive at all points, the parameter a is expected to be positive.
    Since Y decreases as X increases, the parameter b (the exponent on X) is expected to be negative.

     Y          X
         2810        20
         2825        30
         2031        30
         2228        40
         1620        40
         1836        50
         1217        60
         1110        90
         1000       110
          420       120
          602       140
          331       170
   To estimate the parameters a and b in a nonlinear
    equation, we transform the equation by taking
    logarithms:
    lnY = ln a + b lnX
         LOG Y      LOG X
          7.94094     2.99573
          7.94626      3.4012
          7.61628      3.4012
          7.70886     3.68888
          7.39018     3.68888
          7.51534     3.91202
          7.10414     4.09434
          7.01212     4.49981
          6.90776     4.70048
          6.04025     4.78749
          6.40026     4.94164
          5.80212      5.1358
 Run the regression:

Dependent Variable: LOG Y     Observations: 12     F-Ratio: 70     R-Square: 0.875

Variable    Parameter Estimate   Standard Error   T-Ratio
Intercept   11.06                0.48             23.04
Log X       -0.96                0.11             -8.73
    To obtain the parameter estimates:
     note: the slope parameter on Log X is also the exponent on X in the
     nonlinear equation.


            Y = aX^(-0.96)
     note: since b is an elasticity, a 10 percent increase in X results in a 9.6
     percent decrease in Y.
    To obtain the estimate of a:
     note: we take the antilog of the estimated value of the intercept
     parameter.
           a = antilog(11.06)
             = 63,576
            Y = 63,576X^(-0.96)
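A sketch of the full log-linear estimation (numpy and statsmodels assumed available), including the antilog step:

```python
# Regress ln Y on ln X, then take the antilog (exp) of the intercept to recover a.
import numpy as np
import statsmodels.api as sm

Y = np.array([2810, 2825, 2031, 2228, 1620, 1836, 1217, 1110, 1000, 420, 602, 331], dtype=float)
X = np.array([20, 30, 30, 40, 40, 50, 60, 90, 110, 120, 140, 170], dtype=float)

fit = sm.OLS(np.log(Y), sm.add_constant(np.log(X))).fit()
ln_a, b = fit.params                 # expected roughly 11.06 and -0.96
a = np.exp(ln_a)                     # antilog of the intercept, roughly 63,576
print(f"Y ≈ {a:.0f} * X^({b:.2f})")
```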
 Show that the two models are
  mathematically equivalent.

 lnX = 4.5
 lnY = 6.74 = [11.06 - 0.96(4.5)]
 Take the antilogs of Y and X:
 X ≈ 90     Y ≈ 845
 Y = 845 ≈ [63,576(90)^(-0.96)]
 Regression analysis is simply a tool that provides
   the necessary information for a manager to
   make decisions that maximize profit.

 It offers managers a way of estimating the
   functions they need for managerial decision
   making.
Reference:

Maurice, Charles, and Christopher R. Thomas. Managerial Economics, 5th edition.
