{{Short description|Instantaneous rate of change (mathematics)}}
{{Other uses}}
{{pp-semi-indef|small=yes}}
{{good article}}
{{Calculus |differential}}

The '''derivative''' is a fundamental tool of [[calculus]] that quantifies the sensitivity of a [[Function (mathematics)|function]]'s output to changes in its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the [[slope]] of the [[Tangent|tangent line]] to the [[graph of a function|graph of the function]] at that point. The tangent line is the best [[linear approximation]] of the function near that input value. For this reason, the derivative is often described as the '''instantaneous rate of change''', the ratio of the instantaneous change in the dependent variable to that of the independent variable.{{sfn|Stewart|2002|p=129–130}} The process of finding a derivative is called '''differentiation'''.

There are multiple different notations for differentiation, two of the most commonly used being [[Leibniz notation]] and prime notation. Leibniz notation, named after [[Gottfried Wilhelm Leibniz]], is represented as the ratio of two [[Differential (mathematics)|differentials]], whereas prime notation is written by adding a [[prime mark]]. Higher-order derivatives, the result of repeated differentiation, are usually denoted in Leibniz notation by adding superscripts to the differentials, and in prime notation by adding additional prime marks. The higher-order derivatives have applications in physics; for example, while the first derivative of the position of a moving object with respect to [[time]] is the object's [[velocity]], how the position changes as time advances, the second derivative is the object's [[acceleration]], how the velocity changes as time advances.

Derivatives can be generalized to [[function of several real variables|functions of several real variables]]. In this generalization, the derivative is reinterpreted as a [[linear transformation]] whose graph is (after an appropriate translation) the best linear approximation to the graph of the original function. The [[Jacobian matrix]] is the [[matrix (mathematics)|matrix]] that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables. It can be calculated in terms of the [[partial derivative]]s with respect to the independent variables. For a [[real-valued function]] of several variables, the Jacobian matrix reduces to the [[gradient vector]].

==Definition==
===As a limit===
A [[function of a real variable]] f(x) is [[Differentiable function|differentiable]] at a point a of its [[domain of a function|domain]], if its domain contains an [[open interval]] containing a, and the [[limit (mathematics)|limit]]
L=\lim_{h \to 0}\frac{f(a+h)-f(a)}h
exists.{{sfnm|1a1=Stewart|1y=2002|1p=127 | 2a1=Strang et al.|2y=2023|2p=[https://openstax.org/books/calculus-volume-1/pages/3-1-defining-the-derivative 220]}} This means that, for every positive [[real number]] \varepsilon, there exists a positive real number \delta such that, for every h with |h| < \delta and h\ne 0, the value f(a+h) is defined and
\left|L-\frac{f(a+h)-f(a)}h\right|<\varepsilon,
where the vertical bars denote the [[absolute value]].
This is an example of the [[(ε, δ)-definition of limit]].{{sfn|Gonick|2012|p=83}}

If the function f is differentiable at a, that is, if the limit L exists, then this limit is called the ''derivative'' of f at a. Multiple notations for the derivative exist.{{sfnm|1a1=Gonick|1y=2012|1p=88 | 2a1=Strang et al.|2y=2023|2p=[https://openstax.org/books/calculus-volume-1/pages/3-2-the-derivative-as-a-function 234]}} The derivative of f at a can be denoted f'(a), read as "f prime of a"; or it can be denoted \frac{df}{dx}(a), read as "the derivative of f with respect to x at a" or "df by (or over) dx at a". See {{slink||Notation}} below. If f is a function that has a derivative at every point in its [[domain of a function|domain]], then a function can be defined by mapping every point x to the value of the derivative of f at x. This function is written f' and is called the ''derivative function'' or the ''derivative of'' f. The function f sometimes has a derivative at most, but not all, points of its domain. The function whose value at a equals f'(a) whenever f'(a) is defined and elsewhere is undefined is also called the derivative of f. It is still a function, but its domain may be smaller than the domain of f.{{sfnm
| 1a1 = Gonick | 1y = 2012 | 1p = 83
| 2a1 = Strang et al. | 2y = 2023 | 2p = [https://openstax.org/books/calculus-volume-1/pages/3-2-the-derivative-as-a-function 232]
}}

For example, let f be the squaring function: f(x) = x^2. Then the quotient in the definition of the derivative is{{sfn|Gonick|2012|pp=77–80}}
\frac{f(a+h) - f(a)}{h} = \frac{(a+h)^2 - a^2}{h} = \frac{a^2 + 2ah + h^2 - a^2}{h} = 2a + h.
The division in the last step is valid as long as h \neq 0. The closer h is to 0, the closer this expression becomes to the value 2a. The limit exists, and for every input a the limit is 2a. So, the derivative of the squaring function is the doubling function: f'(x) = 2x.

{{multiple image
| total_width = 480
| image1 = Tangent to a curve.svg
| caption1 = The [[graph of a function]], drawn in black, and a [[tangent line]] to that graph, drawn in red. The [[slope]] of the tangent line is equal to the derivative of the function at the marked point.
| image2 = Tangent function animation.gif
| caption2 = The derivative at different points of a differentiable function. In this case, the derivative is equal to \sin \left(x^2\right) + 2x^2 \cos\left(x^2\right)
}}
The ratio in the definition of the derivative is the slope of the line through two points on the graph of the function f, specifically the points (a,f(a)) and (a+h, f(a+h)). As h is made smaller, these points grow closer together, and the slope of this line approaches the limiting value, the slope of the [[tangent]] to the graph of f at a. In other words, the derivative is the slope of the tangent.{{sfnm
| 1a1 = Thompson | 1y = 1998 | 1pp = 34,104
| 2a1 = Stewart | 2y = 2002 | 2p = 128
}}
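The limiting process can also be checked numerically. The following short Python sketch is purely illustrative (the helper name <code>difference_quotient</code> is not standard terminology): it evaluates the quotient for the squaring function at a = 3 and shows the values approaching the derivative 2a = 6 as h shrinks.
<syntaxhighlight lang="python">
def difference_quotient(f, a, h):
    """Slope of the secant line through (a, f(a)) and (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h

def square(x):
    return x * x

# As h shrinks toward 0, the quotient approaches the derivative 2a = 6 at a = 3.
for h in [0.1, 0.01, 0.001, 1e-6]:
    print(h, difference_quotient(square, 3.0, h))
</syntaxhighlight>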
===Using infinitesimals===
One way to think of the derivative \frac{df}{dx}(a) is as the ratio of an [[infinitesimal]] change in the output of the function f to an infinitesimal change in its input.{{sfn|Thompson|1998|pp=84–85}} In order to make this intuition rigorous, a system of rules for manipulating infinitesimal quantities is required.{{sfn|Keisler|2012|pp=902–904}} The system of [[hyperreal number]]s is a way of treating [[Infinity|infinite]] and infinitesimal quantities. The hyperreals are an [[Field extension|extension]] of the [[real number]]s that contain numbers greater than anything of the form 1 + 1 + \cdots + 1 for any finite number of terms. Such numbers are infinite, and their [[Multiplicative inverse|reciprocal]]s are infinitesimals. The application of hyperreal numbers to the foundations of calculus is called [[nonstandard analysis]]. This provides a way to define the basic concepts of calculus such as the derivative and integral in terms of infinitesimals, thereby giving a precise meaning to the d in the Leibniz notation. Thus, the derivative of f(x) becomes f'(x) = \operatorname{st}\left( \frac{f(x + dx) - f(x)}{dx} \right) for an arbitrary nonzero infinitesimal dx, where \operatorname{st} denotes the [[standard part function]], which "rounds off" each finite hyperreal to the nearest real.{{sfnm
| 1a1 = Keisler | 1y = 2012 | 1p = 45
| 2a1 = Henle | 2a2 = Kleinberg | 2y = 2003 | 2p = 66
}} Taking the squaring function f(x) = x^2 as an example again,
\begin{align}
f'(x) &= \operatorname{st}\left(\frac{x^2 + 2x \cdot dx + (dx)^2 -x^2}{dx}\right) \\
&= \operatorname{st}\left(\frac{2x \cdot dx + (dx)^2}{dx}\right) \\
&= \operatorname{st}\left(\frac{2x \cdot dx}{dx} + \frac{(dx)^2}{dx}\right) \\
&= \operatorname{st}\left(2x + dx\right) \\
&= 2x.
\end{align}

==Continuity and differentiability==
{{multiple image
| total_width = 480
| image1 = Right-continuous.svg
| caption1 = This function does not have a derivative at the marked point, as the function is not continuous there (specifically, it has a [[jump discontinuity]]).
| image2 = Absolute value.svg
| caption2 = The absolute value function is continuous but fails to be differentiable at {{math|''x'' {{=}} 0}} since the tangent slopes do not approach the same value from the left as they do from the right.
}}
If f is [[differentiable]] at a, then f must also be [[continuous function|continuous]] at a.{{sfn|Gonick|2012|p=156}} As an example, choose a point a and let f be the [[step function]] that returns the value 1 for all x less than a, and returns a different value 10 for all x greater than or equal to a. The function f cannot have a derivative at a. If h is negative, then a + h is on the low part of the step, so the secant line from a to a + h is very steep; as h tends to zero, the slope tends to infinity. If h is positive, then a + h is on the high part of the step, so the secant line from a to a + h has slope zero. Consequently, the secant lines do not approach any single slope, so the limit of the difference quotient does not exist. However, even if a function is continuous at a point, it may not be differentiable there. For example, the [[absolute value]] function given by f(x) = |x| is continuous at x = 0, but it is not differentiable there. If h is positive, then the slope of the secant line from 0 to h is 1; if h is negative, then the slope of the secant line from 0 to h is -1.{{sfn|Gonick|2012|p=149}} This can be seen graphically as a "kink" or a "cusp" in the graph at x=0. Even a function with a smooth graph is not differentiable at a point where its [[Vertical tangent|tangent is vertical]]: for instance, the function given by f(x) = x^{1/3} is not differentiable at x = 0. In summary, a function that has a derivative is continuous, but there are continuous functions that do not have a derivative.{{sfn|Gonick|2012|p=156}}
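The failure of differentiability at a kink can likewise be observed numerically: the secant slopes of the absolute value function at 0 settle at 1 from the right and −1 from the left, so no single limiting slope exists. A minimal Python sketch (illustrative only; the helper name is not standard) makes this explicit.
<syntaxhighlight lang="python">
def secant_slope(f, a, h):
    """Slope of the secant line of f between a and a + h."""
    return (f(a + h) - f(a)) / h

# Secant slopes of |x| at 0 from the right (+h) and from the left (-h).
for h in [0.1, 0.001, 1e-6]:
    print(h, secant_slope(abs, 0.0, h), secant_slope(abs, 0.0, -h))
</syntaxhighlight>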
Most functions that occur in practice have derivatives at all points or [[Almost everywhere|almost every]] point. Early in the [[history of calculus]], many mathematicians assumed that a continuous function was differentiable at most points.{{sfnm
| 1a1 = Jašek | 1y = 1922
| 2a1 = Jarník | 2y = 1922
| 3a1 = Rychlík | 3y = 1923
}} Under mild conditions (for example, if the function is a [[monotone function|monotone]] or a [[Lipschitz function]]), this is true. However, in 1872, Weierstrass found the first example of a function that is continuous everywhere but differentiable nowhere. This example is now known as the [[Weierstrass function]].{{sfn|David|2018}} In 1931, [[Stefan Banach]] proved that the set of functions that have a derivative at some point is a [[meager set]] in the space of all continuous functions. Informally, this means that hardly any randomly chosen continuous function has a derivative at even one point.{{harvnb|Banach|1931}}, cited in {{harvnb|Hewitt|Stromberg|1965}}.

== Notation ==
{{Main|Notation for differentiation}}
One common way of writing the derivative of a function is [[Leibniz notation]], in which the derivative is expressed as the quotient of two [[differential (mathematics)|differentials]] dy and dx,{{sfn|Apostol|1967|p=172}} which were introduced by [[Gottfried Leibniz|Gottfried Wilhelm Leibniz]] in 1675.{{sfn|Cajori|2007|p=204}} It is still commonly used when the equation y=f(x) is viewed as a functional relationship between [[dependent and independent variables]]. The first derivative is denoted by \frac{dy}{dx}, read as "the derivative of y with respect to x".{{sfn|Moore|Siegel|2013|p=110}} This derivative can alternately be treated as the application of a [[differential operator]] to a function, \frac{dy}{dx} = \frac{d}{dx} f(x). Higher derivatives are expressed using the notation \frac{d^n y}{dx^n} for the n-th derivative of y = f(x). These are abbreviations for multiple applications of the derivative operator; for example, \frac{d^2y}{dx^2} = \frac{d}{dx}\Bigl(\frac{d}{dx} f(x)\Bigr).{{sfn|Varberg|Purcell|Rigdon|2007|p=125–126}} Unlike some alternatives, Leibniz notation specifies the variable of differentiation explicitly, in the denominator, which removes ambiguity when working with multiple interrelated quantities. The derivative of a [[function composition|composed function]] can be expressed using the [[chain rule]]: if u = g(x) and y = f(g(x)) then \frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}. In the formulation of calculus in terms of limits, various authors have assigned the du symbol various meanings. Some authors such as {{harvnb|Varberg|Purcell|Rigdon|2007}}, p. 119 and {{harvnb|Stewart|2002}}, p.
177 do not assign a meaning to du by itself, but only as part of the symbol \frac{du}{dx}. Others define dx as an independent variable, and define du by du = dx f'(x).
In [[non-standard analysis]] du is defined as an infinitesimal. It is also interpreted as the [[exterior derivative]] of a function u. See [[differential (infinitesimal)]] for further information.

Another common notation for differentiation uses the [[Prime (symbol)|prime mark]] added to the symbol of a function f(x). This is known as ''prime notation'', due to [[Joseph-Louis Lagrange]].{{sfn|Schwartzman|1994|p=[https://books.google.com/books?id=PsH2DwAAQBAJ&pg=PA171 171]}} The first derivative is written as f'(x), read as "f prime of x", or y', read as "y prime".{{sfnm
| 1a1 = Moore | 1a2 = Siegel | 1y = 2013 | 1p = 110
| 2a1 = Goodman | 2y = 1963 | 2p = 78–79
}} Similarly, the second and the third derivatives can be written as f'' and f''', respectively.{{sfnm
| 1a1 = Varberg | 1a2 = Purcell | 1a3 = Rigdon | 1y = 2007 | 1p = 125–126
| 2a1 = Cajori | 2y = 2007 | 2p = 228
}} To denote higher derivatives beyond this point, some authors use Roman numerals in [[Subscript and superscript|superscript]], whereas others place the number in parentheses, such as f^{\mathrm{iv}} or f^{(4)}.{{sfnm
| 1a1 = Choudary | 1a2 = Niculescu | 1y = 2014 | 1p = [https://books.google.com/books?id=I8aPBQAAQBAJ&pg=PA222 222]
| 2a1 = Apostol | 2y = 1967 | 2p = 171
}} The latter notation generalizes to yield the notation f^{(n)} for the {{nowrap|1=n-}}th derivative of f.{{sfn|Varberg|Purcell|Rigdon|2007|p=125–126}}

In [[Newton's notation]] or the ''dot notation'', a dot is placed over a symbol to represent a time derivative. If y is a function of t, then the first and second derivatives can be written as \dot{y} and \ddot{y}, respectively. This notation is used exclusively for derivatives with respect to time or [[arc length]]. It is typically used in [[differential equation]]s in [[physics]] and [[differential geometry]].{{sfnm
| 1a1 = Evans | 1y = 1999 | 1p = 63
| 2a1 = Kreyszig | 2y = 1991 | 2p = 1
}} However, the dot notation becomes unmanageable for high-order derivatives (of order 4 or more) and cannot deal with multiple independent variables.

Another notation is ''D-notation'', which represents the differential operator by the symbol D.{{sfn|Varberg|Purcell|Rigdon|2007|p=125–126}} The first derivative is written D f(x) and higher derivatives are written with a superscript, so the n-th derivative is D^nf(x). This notation is sometimes called ''Euler notation'', although it seems that [[Leonhard Euler]] did not use it, and the notation was introduced by [[Louis François Antoine Arbogast]].{{sfn|Cajori|1923}} To indicate a partial derivative, the variable of differentiation is indicated with a subscript; for example, given the function u = f(x, y), its partial derivative with respect to x can be written D_x u or D_x f(x,y). Higher partial derivatives can be indicated by superscripts or multiple subscripts, e.g.
D_{xy} f(x,y) = \frac{\partial}{\partial y} \Bigl(\frac{\partial}{\partial x} f(x,y) \Bigr) and D_{x}^2 f(x,y) = \frac{\partial}{\partial x} \Bigl(\frac{\partial}{\partial x} f(x,y) \Bigr).{{sfnm
| 1a1 = Apostol | 1y = 1967 | 1p = 172
| 2a1 = Varberg | 2a2 = Purcell | 2a3 = Rigdon | 2y = 2007 | 2p = 125–126
}}

==Rules of computation==
{{Main|Differentiation rules}}
In principle, the derivative of a function can be computed from the definition by considering the difference quotient and computing its limit. Once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using ''rules'' for obtaining derivatives of more complicated functions from simpler ones. This process of finding a derivative is known as '''differentiation'''.{{sfn|Apostol|1967|p=160}}

===Rules for basic functions===
The following are the rules for the derivatives of the most common basic functions. Here, a is a real number, and e is [[e (mathematical constant)|the mathematical constant approximately {{nowrap|1=2.71828}}]].{{harvnb|Varberg|Purcell|Rigdon|2007}}. See p. 133 for the power rule, p. 115–116 for the trigonometric functions, p. 326 for the natural logarithm, p. 338–339 for exponential with base e, p. 343 for the exponential with base a, p. 344 for the logarithm with base a, and p. 369 for the inverse of trigonometric functions.

* ''[[Power rule|Derivatives of powers]]'':
*: \frac{d}{dx}x^a = ax^{a-1}

* ''Functions of [[Exponential function|exponential]], [[natural logarithm]], and [[logarithm]] with general base'':
*: \frac{d}{dx}e^x = e^x
*: \frac{d}{dx}a^x = a^x\ln(a), for a > 0
*: \frac{d}{dx}\ln(x) = \frac{1}{x}, for x > 0
*: \frac{d}{dx}\log_a(x) = \frac{1}{x\ln(a)}, for x, a > 0

* ''[[Trigonometric functions]]'':
*: \frac{d}{dx}\sin(x) = \cos(x)
*: \frac{d}{dx}\cos(x) = -\sin(x)
*: \frac{d}{dx}\tan(x) = \sec^2(x) = \frac{1}{\cos^2(x)} = 1 + \tan^2(x)

* ''[[Inverse trigonometric functions]]'':
*: \frac{d}{dx}\arcsin(x) = \frac{1}{\sqrt{1-x^2}}, for -1 < x < 1
*: \frac{d}{dx}\arccos(x)= -\frac{1}{\sqrt{1-x^2}}, for -1 < x < 1
*: \frac{d}{dx}\arctan(x)= \frac{1}{{1+x^2}}

==={{anchor|Rules}}Rules for combined functions===
In the following rules, f and g are functions. These are some of the most basic rules for deducing the derivative of combined functions from derivatives of basic functions. For the constant rule and sum rule, see {{harvnb|Apostol|1967|p=161, 164}}, respectively. For the product rule, quotient rule, and chain rule, see {{harvnb|Varberg|Purcell|Rigdon|2007|p=111–112, 119}}, respectively. For the special case of the product rule, that is, the product of a constant and a function, see {{harvnb|Varberg|Purcell|Rigdon|2007|p=108–109}}.
* ''Constant rule'': if f is constant, then for all x,
*: f'(x) = 0.
* ''[[Linearity of differentiation|Sum rule]]'':
*: (\alpha f + \beta g)' = \alpha f' + \beta g' for all functions f and g and all real numbers \alpha and \beta.
* ''[[Product rule]]'':
*: (fg)' = f'g + fg' for all functions f and g. As a special case, this rule includes the fact (\alpha f)' = \alpha f' whenever \alpha is a constant, because \alpha' f = 0 \cdot f = 0 by the constant rule.
* ''[[Quotient rule]]'':
*: \left(\frac{f}{g} \right)' = \frac{f'g - fg'}{g^2} for all functions f and g at all inputs where {{nowrap|''g'' ≠ 0}}.
* ''[[Chain rule]]'' for [[Function composition|composite functions]]: If f(x) = h(g(x)), then
*: f'(x) = h'(g(x)) \cdot g'(x).

=== Computation example ===
The derivative of the function given by f(x) = x^4 + \sin \left(x^2\right) - \ln(x) e^x + 7 is
\begin{align}
f'(x) &= 4 x^{(4-1)}+ \frac{d\left(x^2\right)}{dx}\cos \left(x^2\right) - \frac{d\left(\ln {x}\right)}{dx} e^x - \ln(x) \frac{d\left(e^x\right)}{dx} + 0 \\
&= 4x^3 + 2x\cos \left(x^2\right) - \frac{1}{x} e^x - \ln(x) e^x.
\end{align}
Here the second term was computed using the [[chain rule]] and the third term using the [[product rule]]. The known derivatives of the elementary functions x^2, x^4, \sin (x), \ln (x), and \exp(x) = e^x, as well as the constant 7, were also used.

== Higher-order derivatives{{anchor|order of derivation|Order}} ==
''Higher-order derivatives'' are obtained by differentiating a function repeatedly. Given that f is a differentiable function, the derivative of f is the first derivative, denoted as f'. The derivative of f' is the [[second derivative]], denoted as f'', and the derivative of f'' is the [[third derivative]], denoted as f'''. Continuing this process, the {{nowrap|1=n-}}th derivative, if it exists, is the derivative of the {{nowrap|1=(n - 1)-}}th derivative, also called the ''derivative of order n''. As has been [[#Notation|discussed above]], the {{nowrap|1=n-}}th derivative of a function f may be denoted as f^{(n)}.{{sfnm
| 1a1 = Apostol | 1y = 1967 | 1p = 160
| 2a1 = Varberg | 2a2 = Purcell | 2a3 = Rigdon | 2y = 2007 | 2p = 125–126
}} A function that has k successive derivatives is called ''k times differentiable''. If the {{nowrap|1=k-}}th derivative is continuous, then the function is said to be of [[differentiability class]] C^k.{{sfn|Warner|1983|p=5}} A function that has infinitely many derivatives is called ''infinitely differentiable'' or ''[[smoothness|smooth]]''.{{sfn|Debnath|Shah|2015|p=[https://books.google.com/books?id=qPuWBQAAQBAJ&pg=PA40 40]}} One example of an infinitely differentiable function is a [[polynomial]]; differentiating it repeatedly eventually yields a [[constant function]], and every subsequent derivative of that function is zero.{{sfn|Carothers|2000|p=[https://books.google.com/books?id=4VFDVy1NFiAC&pg=PA176 176]}}

{{anchor|1=Instantaneous rate of change}}In [[Differential calculus#Applications of derivatives|one of its applications]], higher-order derivatives have specific interpretations in [[physics]]. Suppose that a function represents the position of an object at a given time. The first derivative of that function is the [[velocity]] of the object with respect to time, the second derivative of the function is the [[acceleration]] of the object with respect to time,{{sfn|Apostol|1967|p=160}} and the third derivative is the [[jerk (physics)|jerk]].{{sfn|Stewart|2002|p=193}}
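The rules above, and the computation example, can be reproduced with a computer algebra system. The following sketch assumes the SymPy library is available (any comparable system would do); it recovers the first derivative computed above and, as an illustration of repeated differentiation, the second derivative.
<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x')
f = x**4 + sp.sin(x**2) - sp.log(x) * sp.exp(x) + 7

f1 = sp.diff(f, x)      # first derivative, assembled from the sum, product, and chain rules
f2 = sp.diff(f, x, 2)   # second derivative, i.e. the derivative of f1

# f1 equals 4*x**3 + 2*x*cos(x**2) - exp(x)*log(x) - exp(x)/x (up to ordering of terms)
print(f1)
print(f2)
</syntaxhighlight>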
==In other dimensions==
{{See also|Vector calculus|Multivariable calculus}}

===Vector-valued functions===
A [[vector-valued function]] \mathbf{y} of a real variable sends real numbers to vectors in some [[vector space]] \R^n. A vector-valued function can be split up into its coordinate functions y_1(t), y_2(t), \dots, y_n(t), meaning that \mathbf{y} = (y_1(t), y_2(t), \dots, y_n(t)). This includes, for example, [[parametric curve]]s in \R^2 or \R^3. The coordinate functions are real-valued functions, so the above definition of derivative applies to them. The derivative of \mathbf{y}(t) is defined to be the [[Vector (geometric)|vector]], called the [[Differential geometry of curves|tangent vector]], whose coordinates are the derivatives of the coordinate functions. That is,{{sfn|Stewart|2002|p=893}}
\mathbf{y}'(t)=\lim_{h\to 0}\frac{\mathbf{y}(t+h) - \mathbf{y}(t)}{h},
if the limit exists. The subtraction in the numerator is the subtraction of vectors, not scalars. If the derivative of \mathbf{y} exists for every value of t, then \mathbf{y} is another vector-valued function.{{sfn|Stewart|2002|p=893}}

===Partial derivatives===
{{Main|Partial derivative}}
Functions can depend upon [[function (mathematics)#Multivariate function|more than one variable]]. A [[partial derivative]] of a function of several variables is its derivative with respect to one of those variables, with the others held constant. Partial derivatives are used in [[vector calculus]] and [[differential geometry]]. As with ordinary derivatives, multiple notations exist: the partial derivative of a function f(x, y, \dots) with respect to the variable x is variously denoted by
{{block indent | em = 1.2 | text = f_x, f'_x, \partial_x f, \frac{\partial}{\partial x}f, or \frac{\partial f}{\partial x},}}
among other possibilities.{{sfnm
| 1a1 = Stewart | 1y = 2002 | 1p = [https://archive.org/details/calculus0000stew/page/947/mode/1up 947]
| 2a1 = Christopher | 2y = 2013 | 2p = 682
}} It can be thought of as the rate of change of the function in the x-direction.{{sfn|Stewart|2002|p=[https://archive.org/details/calculus0000stew/page/949 949]}} Here [[∂]] is a rounded ''d'' called the '''partial derivative symbol'''. To distinguish it from the letter ''d'', ∂ is sometimes pronounced "der", "del", or "partial" instead of "dee".{{sfnm
| 1a1 = Silverman | 1y = 1989 | 1p = [https://books.google.com/books?id=CQ-kqE9Yo9YC&pg=PA216 216]
| 2a1 = Bhardwaj | 2y= 2005 | 2loc = See [https://books.google.com/books?id=qSlGMwpNueoC&pg=SA6-PA4 p. 6.4]
}} For example, let f(x,y) = x^2 + xy + y^2; then the partial derivatives of the function f with respect to the variables x and y are, respectively:
\frac{\partial f}{\partial x} = 2x + y, \qquad \frac{\partial f}{\partial y} = x + 2y.
In general, the partial derivative of a function f(x_1, \dots, x_n) in the direction x_i at the point (a_1, \dots, a_n) is defined to be:{{sfn|Mathai|Haubold|2017|p=[https://books.google.com/books?id=v20uDwAAQBAJ&pg=PA52 52]}}
\frac{\partial f}{\partial x_i}(a_1,\ldots,a_n) = \lim_{h \to 0}\frac{f(a_1,\ldots,a_i+h,\ldots,a_n) - f(a_1,\ldots,a_i,\ldots,a_n)}{h}.
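For the example function above, each partial derivative can be approximated by a one-variable difference quotient in which the other variable is held constant. The following minimal Python sketch (the helper names are illustrative only) returns values close to 2x + y and x + 2y.
<syntaxhighlight lang="python">
def f(x, y):
    return x**2 + x*y + y**2

def partial_x(f, x, y, h=1e-6):
    """Difference quotient in x, with y held constant."""
    return (f(x + h, y) - f(x, y)) / h

def partial_y(f, x, y, h=1e-6):
    """Difference quotient in y, with x held constant."""
    return (f(x, y + h) - f(x, y)) / h

# At (x, y) = (1, 2) the exact values are 2x + y = 4 and x + 2y = 5.
print(partial_x(f, 1.0, 2.0), partial_y(f, 1.0, 2.0))
</syntaxhighlight>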
This is fundamental for the study of the [[functions of several real variables]]. Let f(x_1, \dots, x_n) be such a [[real-valued function]]. If all the partial derivatives \frac{\partial f}{\partial x_j} of f are defined at the point (a_1, \dots, a_n), these partial derivatives define the vector
\nabla f(a_1, \ldots, a_n) = \left(\frac{\partial f}{\partial x_1}(a_1, \ldots, a_n), \ldots, \frac{\partial f}{\partial x_n}(a_1, \ldots, a_n)\right),
which is called the [[gradient]] of f at a. If f is differentiable at every point in some domain, then the gradient is a [[vector-valued function]] \nabla f that maps the point (a_1, \dots, a_n) to the vector \nabla f(a_1, \dots, a_n). Consequently, the gradient determines a [[vector field]].{{sfn|Gbur|2011|pp=36–37}}

===Directional derivatives===
{{Main|Directional derivative}}

If f is a real-valued function on \R^n, then the partial derivatives of f measure its variation in the direction of the coordinate axes. For example, if f is a function of x and y, then its partial derivatives measure the variation in f in the x and y direction. However, they do not directly measure the variation of f in any other direction, such as along the diagonal line y = x. These are measured using directional derivatives. Choose a vector \mathbf{v} = (v_1,\ldots,v_n); then the [[directional derivative]] of f in the direction of \mathbf{v} at the point \mathbf{x} is:{{sfn|Varberg|Purcell|Rigdon|2007|p=642}}
D_{\mathbf{v}}{f}(\mathbf{x}) = \lim_{h \rightarrow 0}{\frac{f(\mathbf{x} + h\mathbf{v}) - f(\mathbf{x})}{h}}.

If all the partial derivatives of f exist and are continuous at \mathbf{x}, then they determine the directional derivative of f in the direction \mathbf{v} by the formula:{{sfn|Guzman|2003|p=[https://books.google.com/books?id=aI_qBwAAQBAJ&pg=PA35 35]}}
D_{\mathbf{v}}{f}(\mathbf{x}) = \sum_{j=1}^n v_j \frac{\partial f}{\partial x_j}.
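For the same example function f(x,y) = x^2 + xy + y^2, the sum formula gives the directional derivative directly from the gradient. A short illustrative Python sketch (the helper names are not standard) follows.
<syntaxhighlight lang="python">
def grad_f(x, y):
    # Exact gradient of f(x, y) = x**2 + x*y + y**2, from the partial derivatives above.
    return (2*x + y, x + 2*y)

def directional_derivative(grad, v):
    """D_v f = sum over j of v_j * (partial f / partial x_j)."""
    return sum(g_j * v_j for g_j, v_j in zip(grad, v))

v = (1.0, 1.0)                                      # direction along the diagonal y = x
print(directional_derivative(grad_f(1.0, 2.0), v))  # 1*4 + 1*5 = 9
</syntaxhighlight>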
===Total derivative, total differential and Jacobian matrix===
{{Main|Total derivative}}

When f is a function from an open subset of \R^n to \R^m, then the directional derivative of f in a chosen direction is the best linear approximation to f at that point and in that direction. However, when n > 1, no single directional derivative can give a complete picture of the behavior of f. The total derivative gives a complete picture by considering all directions at once. That is, for any vector \mathbf{v} starting at \mathbf{a}, the linear approximation formula holds:{{sfn|Davvaz|2023|p=[https://books.google.com/books?id=ofzKEAAAQBAJ&pg=PA266 266]}}
f(\mathbf{a} + \mathbf{v}) \approx f(\mathbf{a}) + f'(\mathbf{a})\mathbf{v}.
As in the single-variable case, f'(\mathbf{a}) is chosen so that the error in this approximation is as small as possible. The total derivative of f at \mathbf{a} is the unique linear transformation f'(\mathbf{a}) \colon \R^n \to \R^m such that{{sfn|Davvaz|2023|p=[https://books.google.com/books?id=ofzKEAAAQBAJ&pg=PA266 266]}}
\lim_{\mathbf{h}\to 0} \frac{\lVert f(\mathbf{a} + \mathbf{h}) - (f(\mathbf{a}) + f'(\mathbf{a})\mathbf{h})\rVert}{\lVert\mathbf{h}\rVert} = 0.
Here \mathbf{h} is a vector in \R^n, so the norm in the denominator is the standard length on \R^n. However, f'(\mathbf{a}) \mathbf{h} is a vector in \R^m, and the norm in the numerator is the standard length on \R^m.{{sfn|Davvaz|2023|p=[https://books.google.com/books?id=ofzKEAAAQBAJ&pg=PA266 266]}} If \mathbf{v} is a vector starting at \mathbf{a}, then f'(\mathbf{a}) \mathbf{v} is called the [[pushforward (differential)|pushforward]] of \mathbf{v} by f.{{sfn|Lee|2013|p=72}}

If the total derivative exists at \mathbf{a}, then all the partial derivatives and directional derivatives of f exist at \mathbf{a}, and for all \mathbf{v}, f'(\mathbf{a})\mathbf{v} is the directional derivative of f in the direction \mathbf{v}. If f is written using coordinate functions, so that f = (f_1, f_2, \dots, f_m), then the total derivative can be expressed using the partial derivatives as a [[matrix (mathematics)|matrix]]. This matrix is called the [[Jacobian matrix]] of f at \mathbf{a}:{{sfn|Davvaz|2023|p=[https://books.google.com/books?id=ofzKEAAAQBAJ&pg=PA267 267]}}
f'(\mathbf{a}) = \operatorname{Jac}_{\mathbf{a}} = \left(\frac{\partial f_i}{\partial x_j}\right)_{ij}.
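A computer algebra system can assemble the Jacobian matrix from the partial derivatives. The sketch below assumes SymPy is available and uses an illustrative map f : \R^2 \to \R^2 that is not taken from the text above.
<syntaxhighlight lang="python">
import sympy as sp

x, y = sp.symbols('x y')
# An illustrative map f : R^2 -> R^2 chosen only for demonstration.
f = sp.Matrix([x**2 * y, x + sp.sin(y)])

J = f.jacobian([x, y])   # matrix of partial derivatives d f_i / d x_j
print(J)                 # Matrix([[2*x*y, x**2], [1, cos(y)]])
</syntaxhighlight>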
==Generalizations==
{{Main|Generalizations of the derivative}}

The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a [[linear approximation]] of the function at that point.
* An important generalization of the derivative concerns [[complex function]]s of [[Complex number|complex variable]]s, such as functions from (a domain in) the complex numbers \C to \C. The notion of the derivative of such a function is obtained by replacing real variables with complex variables in the definition.{{sfn|Roussos|2014|p=303}} If \C is identified with \R^2 by writing a complex number z as x+iy, then a differentiable function from \C to \C is certainly differentiable as a function from \R^2 to \R^2 (in the sense that its partial derivatives all exist), but the converse is not true in general: the complex derivative only exists if the real derivative is ''complex linear'' and this imposes relations between the partial derivatives called the [[Cauchy–Riemann equations]] – see [[holomorphic function]]s.{{sfn|Gbur|2011|pp=261–264}}
* Another generalization concerns functions between [[smooth manifold|differentiable or smooth manifolds]]. Intuitively speaking, such a manifold M is a space that can be approximated near each point x by a vector space called its [[tangent space]]: the prototypical example is a [[smooth surface]] in \R^3. The derivative (or differential) of a (differentiable) map f:M\to N between manifolds, at a point x in M, is then a [[linear map]] from the tangent space of M at x to the tangent space of N at f(x). The derivative function becomes a map between the [[tangent bundle]]s of M and N. This definition is used in [[differential geometry]].{{sfn|Gray|Abbena|Salamon|2006|p=[https://books.google.com/books?id=owEj9TMYo7IC&pg=PA826 826]}}
* Differentiation can also be defined for maps between [[vector space]]s, such as [[Banach space]]s; the corresponding generalizations are the [[Gateaux derivative]] and the [[Fréchet derivative]].{{harvnb|Azegami|2020}}. See p. [https://books.google.com/books?id=e08AEAAAQBAJ&pg=PA209 209] for the Gateaux derivative, and p. [https://books.google.com/books?id=e08AEAAAQBAJ&pg=PA211 211] for the Fréchet derivative.
* One deficiency of the classical derivative is that very many functions are not differentiable. Nevertheless, there is a way of extending the notion of the derivative so that all [[continuous function|continuous]] functions and many other functions can be differentiated using a concept known as the [[weak derivative]]. The idea is to embed the continuous functions in a larger space called the space of [[distribution (mathematics)|distributions]] and only require that a function is differentiable "on average".{{sfn|Funaro|1992|p=[https://books.google.com/books?id=CX4SXf3mdeUC&pg=PA84 84–85]}}
* Properties of the derivative have inspired the introduction and study of many similar objects in algebra and topology; an example is [[differential algebra]], in which derivations are studied on algebraic structures such as [[Ring (mathematics)|rings]], [[Ideal (ring theory)|ideals]], and [[Field (mathematics)|fields]].{{sfn|Kolchin|1973|p=[https://books.google.com/books?id=yDCfhIjka-8C&pg=PA58 58], [https://books.google.com/books?id=yDCfhIjka-8C&pg=PA126 126]}}
* The discrete equivalent of differentiation is [[finite difference]]s. The study of differential calculus is unified with the calculus of finite differences in [[time scale calculus]].{{sfn|Georgiev|2018|p=[https://books.google.com/books?id=OJJVDwAAQBAJ&pg=PA8 8]}}
* The [[arithmetic derivative]] is a function defined on the [[Integer|integers]] in terms of their [[prime factorization]], by analogy with the product rule.{{sfn|Barbeau|1961}}

==See also==
* [[Integral]]

== Notes ==
{{reflist}}

== References ==
{{refbegin|30em}}
*{{Citation | last = Apostol | first = Tom M. | author-link = Tom M. Apostol | date = June 1967 | title = Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra | publisher = Wiley | edition = 2nd | volume = 1 | isbn = 978-0-471-00005-1 | url-access = registration | url = https://archive.org/details/calculus01apos }}
*{{Citation | last = Azegami | first = Hideyuki | year = 2020 | title = Shape Optimization Problems | series = Springer Optimization and Its Applications | volume = 164 | publisher = Springer | url = https://books.google.com/books?id=e08AEAAAQBAJ | doi = 10.1007/978-981-15-7618-8 | isbn = 978-981-15-7618-8 | s2cid = 226442409 }}
*{{Citation | last = Banach | first = Stefan | author-link = Stefan Banach | title = Über die Baire'sche Kategorie gewisser Funktionenmengen | journal = Studia Math. | volume = 3 | issue = 3 | year = 1931 | pages = 174–179 | doi = 10.4064/sm-3-1-174-179 | doi-access = free | url = https://scholar.google.com/scholar?output=instlink&q=info:SkKdCEmUd6QJ:scholar.google.com/&hl=en&as_sdt=0,50&scillfp=3432975470163241186&oi=lle | postscript = . }}
*{{Citation | last = Barbeau | first = E. J. | title = Remarks on an arithmetic derivative | journal = [[Canadian Mathematical Bulletin]] | volume = 4 | year = 1961 | issue = 2 | pages = 117–122 | doi = 10.4153/CMB-1961-013-0 | zbl = 0101.03702 | doi-access = free }}
*{{Citation | last = Bhardwaj | first = R. S. | year = 2005 | title = Mathematics for Economics & Business | edition = 2nd | publisher = Excel Books India | isbn = 9788174464507 }}
*{{Citation | last = Cajori | first = Florian | author-link = Florian Cajori | year = 1923 | title = The History of Notations of the Calculus | journal = Annals of Mathematics | volume = 25 | issue = 1 | pages = 1–46 | doi = 10.2307/1967725 | jstor = 1967725 | hdl = 2027/mdp.39015017345896 | hdl-access = free }}
*{{Citation | last = Cajori | first = Florian | year = 2007 | title = A History of Mathematical Notations | volume = 2 | publisher = Cosimo Classics | url = https://books.google.com/books?id=RUz1Us2UDh4C&pg=PA204 | isbn = 978-1-60206-713-4 }}
*{{Citation | last = Carothers | first = N. L. | year = 2000 | title = Real Analysis | publisher = Cambridge University Press }}
*{{Citation | last1 = Choudary | first1 = A. D. R. | last2 = Niculescu | first2 = Constantin P. | year = 2014 | title = Real Analysis on Intervals | publisher = Springer India | doi = 10.1007/978-81-322-2148-7 | isbn = 978-81-322-2148-7 }}
*{{Citation | last = Christopher | first = Essex | year = 2013 | title = Calculus: A complete course | page = 682 | publisher = Pearson | isbn = 9780321781079 | oclc = 872345701 }}
*{{Citation | last1 = Courant | first1 = Richard | author-link1 = Richard Courant | last2 = John | first2 = Fritz | author-link2 = Fritz John | date = December 22, 1998 | title = Introduction to Calculus and Analysis, Vol. 1 | publisher = [[Springer-Verlag]] | doi = 10.1007/978-1-4613-8955-2 | isbn = 978-3-540-65058-4 }}
*{{Citation | last = David | first = Claire | year = 2018 | title = Bypassing dynamical systems: A simple way to get the box-counting dimension of the graph of the Weierstrass function | journal = Proceedings of the International Geometry Center | publisher = Academy of Sciences of Ukraine | volume = 11 | issue = 2 | pages = 53–68 | doi = 10.15673/tmgc.v11i2.1028 | doi-access = free | arxiv = 1711.10349 }}
*{{Citation | last = Davvaz | first = Bijan | year = 2023 | title = Vectors and Functions of Several Variables | publisher = Springer | doi = 10.1007/978-981-99-2935-1 | isbn = 978-981-99-2935-1 | s2cid = 259885793 }}
*{{Citation | last1 = Debnath | first1 = Lokenath | last2 = Shah | first2 = Firdous Ahmad | year = 2015 | title = Wavelet Transforms and Their Applications | edition = 2nd | publisher = Birkhäuser | url = https://books.google.com/books?id=qPuWBQAAQBAJ | doi = 10.1007/978-0-8176-8418-1 | isbn = 978-0-8176-8418-1 }}
*{{Citation | last = Evans | first = Lawrence | author-link = Lawrence Evans | year = 1999 | title = Partial Differential Equations | publisher = American Mathematical Society | isbn = 0-8218-0772-2 }}
*{{Citation | last = Eves | first = Howard | date = January 2, 1990 | title = An Introduction to the History of Mathematics | edition = 6th | publisher = Brooks Cole | isbn = 978-0-03-029558-4 }}
*{{Citation | last = Funaro | first = Daniele | year = 1992 | title = Polynomial Approximation of Differential Equations | series = Lecture Notes in Physics Monographs | volume = 8 | publisher = Springer | url = https://books.google.com/books?id=CX4SXf3mdeUC | doi = 10.1007/978-3-540-46783-0 | isbn = 978-3-540-46783-0 }}
*{{Citation | last = Gbur | first = Greg | author-link = Greg Gbur | year = 2011 | title = Mathematical Methods for Optical Physics and Engineering | publisher = Cambridge University Press | bibcode = 2011mmop.book.....G | isbn = 978-1-139-49269-0 }}
*{{Citation | last = Georgiev | first = Svetlin G. | year = 2018 | title = Fractional Dynamic Calculus and Fractional Dynamic Equations on Time Scales | url = https://books.google.com/books?id=OJJVDwAAQBAJ | publisher = Springer | doi = 10.1007/978-3-319-73954-0 | isbn = 978-3-319-73954-0 }}
*{{Citation | last = Goodman | first = A. W. | year = 1963 | title = Analytic Geometry and the Calculus | publisher = The MacMillan Company | url = https://books.google.com/books?id=v7i2YJJg_gcC }}
*{{Citation | last = Gonick | first = Larry | author-link = Larry Gonick | year = 2012 | title = The Cartoon Guide to Calculus | publisher = William Morrow | url = https://www.larrygonick.com/titles/science/cartoon-guide-to-calculus-2/ | isbn = 978-0-06-168909-3 }}
*{{Citation | last1 = Gray | first1 = Alfred | last2 = Abbena | first2 = Elsa | last3 = Salamon | first3 = Simon | year = 2006 | title = Modern Differential Geometry of Curves and Surfaces with Mathematica | publisher = CRC Press | url = https://books.google.com/books?id=owEj9TMYo7IC | isbn = 978-1-58488-448-4 }}
*{{Citation | last = Guzman | first = Alberto | year = 2003 | title = Derivatives and Integrals of Multivariable Functions | url = https://books.google.com/books?id=aI_qBwAAQBAJ | publisher = Springer | doi = 10.1007/978-1-4612-0035-2 | isbn = 978-1-4612-0035-2 }}
*{{Citation | last1 = Henle | first1 = James M. | last2 = Kleinberg | first2 = Eugene M. | year = 2003 | title = Infinitesimal Calculus | publisher = Dover Publications | isbn = 978-0-486-42886-4 }}
*{{Citation | last1 = Hewitt | first1 = Edwin | author-link1 = Edwin Hewitt | last2 = Stromberg | first2 = Karl R. | year = 1965 | title = Real and abstract analysis | publisher = Springer-Verlag | pages = Theorem 17.8 | no-pp = true | doi = 10.1007/978-3-662-29794-0 | isbn = 978-3-662-28275-5 }}
*{{Citation | last = Jašek | first = Martin | year = 1922 | title = Funkce Bolzanova | journal = Časopis pro Pěstování Matematiky a Fyziky | volume = 51 | issue = 2 | pages = 69–76 | doi = 10.21136/CPMF.1922.121916 | url = http://dml.cz/bitstream/handle/10338.dmlcz/121916/CasPestMatFys_051-1922-2_2.pdf | lang = cs }}
*{{Citation | last = Jarník | first = Vojtěch | author-link = Vojtěch Jarník | year = 1922 | title = O funkci Bolzanově | journal = Časopis pro Pěstování Matematiky a Fyziky | volume = 51 | issue = 4 | pages = 248–264 | doi = 10.21136/CPMF.1922.109021 | url = http://dml.cz/bitstream/handle/10338.dmlcz/109021/CasPestMatFys_051-1922-4_5.pdf | lang = cs }}. See the English version [http://dml.cz/bitstream/handle/10338.dmlcz/400073/Bolzano_15-1981-1_6.pdf here].
*{{Citation | last = Keisler | first = H. Jerome | year = 2012 | orig-year = 1986 | title = Elementary Calculus: An Approach Using Infinitesimals | edition = 2nd | publisher = Prindle, Weber & Schmidt | url = http://www.math.wisc.edu/~keisler/calc.html | isbn = 978-0-871-50911-6 }}
*{{Citation | last = Kolchin | first = Ellis | year = 1973 | title = Differential Algebra And Algebraic Groups | publisher = Academic Press | url = https://books.google.com/books?id=yDCfhIjka-8C | isbn = 978-0-08-087369-5 }}
*{{Citation | last = Kreyszig | first = Erwin | year = 1991 | title = Differential Geometry | location = New York | publisher = [[Dover Publications|Dover]] | isbn = 0-486-66721-9 }}
*{{Citation | last1 = Larson | first1 = Ron | last2 = Hostetler | first2 = Robert P. | last3 = Edwards | first3 = Bruce H. | date = February 28, 2006 | title = Calculus: Early Transcendental Functions | edition = 4th | publisher = Houghton Mifflin Company | isbn = 978-0-618-60624-5 }}
*{{Citation | last = Lee | first = John M. | year = 2013 | title = Introduction to Smooth Manifolds | series = Graduate Texts in Mathematics | volume = 218 | publisher = Springer | doi = 10.1007/978-0-387-21752-9 | isbn = 978-0-387-21752-9 }}
*{{Citation | last1 = Mathai | first1 = A. M. | last2 = Haubold | first2 = H. J. | year = 2017 | title = Fractional and Multivariable Calculus: Model Building and Optimization Problems | url = https://books.google.com/books?id=v20uDwAAQBAJ | publisher = Springer | doi = 10.1007/978-3-319-59993-9 | isbn = 978-3-319-59993-9 }}
*{{Citation | last1 = Moore | first1 = Will H. | last2 = Siegel | first2 = David A. | year = 2013 | title = A Mathematical Course for Political and Social Research | publisher = Princeton University Press | url = https://books.google.com/books?id=emmYDwAAQBAJ | isbn = 978-0-691-15995-9 }}
*{{Citation | last = Roussos | first = Ioannis M. | year = 2014 | title = Improper Riemann Integral | publisher = [[CRC Press]] | url = https://books.google.com/books?id=nX1cAgAAQBAJ | isbn = 978-1-4665-8807-3 }}
*{{Citation | last = Rychlík | first = Karel | year = 1923 | title = Über eine Funktion aus Bolzanos handschriftlichem Nachlasse }}
*{{Citation | last = Schwartzman | first = Steven | year = 1994 | title = The Words of Mathematics: An Etymological Dictionary of Mathematical Terms Used in English | publisher = Mathematical Association of America | url = https://books.google.com/books?id=PsH2DwAAQBAJ | isbn = 9781614445012 }}
*{{Citation | last = Silverman | first = Richard A. | year = 1989 | title = Essential Calculus: With Applications | publisher = Courier Corporation | isbn = 9780486660974 }}
*{{Citation | last = Stewart | first = James | author-link = James Stewart (mathematician) | date = December 24, 2002 | title = Calculus | edition = 5th | publisher = Brooks Cole | url-access = registration | url = https://archive.org/details/calculus0000stew | isbn = 978-0-534-39339-7 }}
*{{Citation | last1 = Strang | first1 = Gilbert | author-link1 = Gilbert Strang | display-authors = etal | year = 2023 | title = Calculus, volume 1 | publisher = OpenStax | url = https://openstax.org/details/books/calculus-volume-1 | isbn = 978-1-947172-13-5 | ref = {{sfnref|Strang et al.|2023}} }}
*{{Citation | last = Thompson | first = Silvanus P. | author-link = Silvanus P. Thompson | date = September 8, 1998 | title = [[Calculus Made Easy]] | edition = Revised, Updated, Expanded | place = New York | publisher = St. Martin's Press | isbn = 978-0-312-18548-0 }}
*{{Citation | last1 = Varberg | first1 = Dale E. | last2 = Purcell | first2 = Edwin J. | last3 = Rigdon | first3 = Steven E. | year = 2007 | title = Calculus | edition = 9th | publisher = [[Pearson Prentice Hall]] | isbn = 978-0131469686 }}
*{{Citation | last = Warner | first = Frank W. | author-link = Frank Wilson Warner | year = 1983 | title = Foundations of Differentiable Manifolds and Lie Groups | publisher = Springer | url = https://books.google.com/books?id=t6PNrjnfhuIC | isbn = 978-0-387-90894-6 }}
{{Refend}}

==External links==
{{Sister project links|Differentiation|auto=1|wikt=y|b=y|v=y}}
*{{springer|title=Derivative|id=p/d031260}}
*[[Khan Academy]]: [https://www.khanacademy.org/math/differential-calculus/taking-derivatives/intro_differential_calc/v/newton-leibniz-and-usain-bolt "Newton, Leibniz, and Usain Bolt"]
*{{MathWorld |title=Derivative |id=Derivative}}
*[http://www.wolframalpha.com/calculators/derivative-calculator/ Online Derivative Calculator] from [[Wolfram Alpha]].

{{Calculus topics}}
{{Analysis-footer}}
{{Authority control}}

[[Category:Mathematical analysis]]
[[Category:Differential calculus]]
[[Category:Functions and mappings]]
[[Category:Linear operators in calculus]]
[[Category:Rates]]
[[Category:Change]]