
src/java.desktop/share/classes/java/awt/Graphics2D.java



   1 /*
   2  * Copyright (c) 1996, 2013, Oracle and/or its affiliates. All rights reserved.
   3  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
   4  *
   5  * This code is free software; you can redistribute it and/or modify it
   6  * under the terms of the GNU General Public License version 2 only, as
   7  * published by the Free Software Foundation.  Oracle designates this
   8  * particular file as subject to the "Classpath" exception as provided
   9  * by Oracle in the LICENSE file that accompanied this code.
  10  *
  11  * This code is distributed in the hope that it will be useful, but WITHOUT
  12  * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
  13  * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
  14  * version 2 for more details (a copy is included in the LICENSE file that
  15  * accompanied this code).
  16  *
  17  * You should have received a copy of the GNU General Public License version
  18  * 2 along with this work; if not, write to the Free Software Foundation,
  19  * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
  20  *
  21  * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
  22  * or visit www.oracle.com if you need additional information or have any


  61  * the resolution might not be known when the rendering operations are
  62  * captured, the {@code Graphics2D Transform} is set up
  63  * to transform user coordinates to a virtual device space that
  64  * approximates the expected resolution of the target device. Further
  65  * transformations might need to be applied at playback time if the
  66  * estimate is incorrect.
  67  * <p>
  68  * Some of the operations performed by the rendering attribute objects
  69  * occur in the device space, but all {@code Graphics2D} methods take
  70  * user space coordinates.
  71  * <p>
  72  * Every {@code Graphics2D} object is associated with a target that
  73  * defines where rendering takes place. A
  74  * {@link GraphicsConfiguration} object defines the characteristics
  75  * of the rendering target, such as pixel format and resolution.
  76  * The same rendering target is used throughout the life of a
  77  * {@code Graphics2D} object.
  78  * <p>
  79  * When creating a {@code Graphics2D} object, the
  80  * {@code GraphicsConfiguration}
  81  * specifies the <a name="deftransform">default transform</a> for
  82  * the target of the {@code Graphics2D} (a
  83  * {@link Component} or {@link Image}).  This default transform maps the
  84  * user space coordinate system to screen and printer device coordinates
  85  * such that the origin maps to the upper left hand corner of the
  86  * target region of the device with increasing X coordinates extending
  87  * to the right and increasing Y coordinates extending downward.
  88  * The scaling of the default transform is set to identity for those devices
  89  * that are close to 72 dpi, such as screen devices.
  90  * The scaling of the default transform is set to approximately 72 user
  91  * space coordinates per square inch for high resolution devices, such as
  92  * printers.  For image buffers, the default transform is the
  93  * {@code Identity} transform.
  94  *
  95  * <h2>Rendering Process</h2>
  96  * The Rendering Process can be broken down into four phases that are
  97  * controlled by the {@code Graphics2D} rendering attributes.
  98  * The renderer can optimize many of these steps, either by caching the
  99  * results for future calls, by collapsing multiple virtual steps into
 100  * a single operation, or by recognizing various attributes as common
 101  * simple cases that can be eliminated by modifying other parts of the


 112  * manipulation methods of {@code Graphics} and
 113  * {@code Graphics2D}.  This <i>user clip</i>
 114  * is transformed into device space by the current
 115  * {@code Transform} and combined with the
 116  * <i>device clip</i>, which is defined by the visibility of windows and
 117  * device extents.  The combination of the user clip and device clip
 118  * defines the <i>composite clip</i>, which determines the final clipping
 119  * region.  The user clip is not modified by the rendering
 120  * system to reflect the resulting composite clip.
 121  * <li>
 122  * Determine what colors to render.
 123  * <li>
 124  * Apply the colors to the destination drawing surface using the current
 125  * {@link Composite} attribute in the {@code Graphics2D} context.
 126  * </ol>
 127  * <br>
 128  * The three types of rendering operations, along with details of each
 129  * of their particular rendering processes are:
 130  * <ol>
 131  * <li>
 132  * <b><a name="rendershape">{@code Shape} operations</a></b>
 133  * <ol>
 134  * <li>
 135  * If the operation is a {@code draw(Shape)} operation, then
 136  * the  {@link Stroke#createStrokedShape(Shape) createStrokedShape}
 137  * method on the current {@link Stroke} attribute in the
 138  * {@code Graphics2D} context is used to construct a new
 139  * {@code Shape} object that contains the outline of the specified
 140  * {@code Shape}.
 141  * <li>
 142  * The {@code Shape} is transformed from user space to device space
 143  * using the current {@code Transform}
 144  * in the {@code Graphics2D} context.
 145  * <li>
 146  * The outline of the {@code Shape} is extracted using the
 147  * {@link Shape#getPathIterator(AffineTransform) getPathIterator} method of
 148  * {@code Shape}, which returns a
 149  * {@link java.awt.geom.PathIterator PathIterator}
 150  * object that iterates along the boundary of the {@code Shape}.
 151  * <li>
 152  * If the {@code Graphics2D} object cannot handle the curved segments
 153  * that the {@code PathIterator} object returns then it can call the
 154  * alternate
 155  * {@link Shape#getPathIterator(AffineTransform, double) getPathIterator}
 156  * method of {@code Shape}, which flattens the {@code Shape}.
 157  * <li>
 158  * The current {@link Paint} in the {@code Graphics2D} context
 159  * is queried for a {@link PaintContext}, which specifies the
 160  * colors to render in device space.
 161  * </ol>
 162  * <li>
 163  * <b><a name=rendertext>Text operations</a></b>
 164  * <ol>
 165  * <li>
 166  * The following steps are used to determine the set of glyphs required
 167  * to render the indicated {@code String}:
 168  * <ol>
 169  * <li>
 170  * If the argument is a {@code String}, then the current
 171  * {@code Font} in the {@code Graphics2D} context is asked to
 172  * convert the Unicode characters in the {@code String} into a set of
 173  * glyphs for presentation with whatever basic layout and shaping
 174  * algorithms the font implements.
 175  * <li>
 176  * If the argument is an
 177  * {@link AttributedCharacterIterator},
 178  * the iterator is asked to convert itself to a
 179  * {@link java.awt.font.TextLayout TextLayout}
 180  * using its embedded font attributes. The {@code TextLayout}
 181  * implements more sophisticated glyph layout algorithms that
 182  * perform Unicode bi-directional layout adjustments automatically
 183  * for multiple fonts of differing writing directions.
 184  * <li>
 185  * If the argument is a
 186  * {@link GlyphVector}, then the
 187  * {@code GlyphVector} object already contains the appropriate
 188  * font-specific glyph codes with explicit coordinates for the position of
 189  * each glyph.
 190  * </ol>
 191  * <li>
 192  * The current {@code Font} is queried to obtain outlines for the
 193  * indicated glyphs.  These outlines are treated as shapes in user space
 194  * relative to the position of each glyph that was determined in step 1.
 195  * <li>
 196  * The character outlines are filled as indicated above
 197  * under <a href="#rendershape">{@code Shape} operations</a>.
 198  * <li>
 199  * The current {@code Paint} is queried for a
 200  * {@code PaintContext}, which specifies
 201  * the colors to render in device space.
 202  * </ol>
 203  * <li>
 204  * <b><a name= renderingimage>{@code Image} Operations</a></b>
 205  * <ol>
 206  * <li>
 207  * The region of interest is defined by the bounding box of the source
 208  * {@code Image}.
 209  * This bounding box is specified in Image Space, which is the
 210  * {@code Image} object's local coordinate system.
 211  * <li>
 212  * If an {@code AffineTransform} is passed to
 213  * {@link #drawImage(java.awt.Image, java.awt.geom.AffineTransform, java.awt.image.ImageObserver) drawImage(Image, AffineTransform, ImageObserver)},
 214  * the {@code AffineTransform} is used to transform the bounding
 215  * box from image space to user space. If no {@code AffineTransform}
 216  * is supplied, the bounding box is treated as if it is already in user space.
 217  * <li>
 218  * The bounding box of the source {@code Image} is transformed from user
 219  * space into device space using the current {@code Transform}.
 220  * Note that the result of transforming the bounding box does not
 221  * necessarily result in a rectangular region in device space.
 222  * <li>
 223  * The {@code Image} object determines what colors to render,
 224  * sampled according to the source to destination


   1 /*
   2  * Copyright (c) 1996, 2017, Oracle and/or its affiliates. All rights reserved.
   3  * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
   4  *
   5  * This code is free software; you can redistribute it and/or modify it
   6  * under the terms of the GNU General Public License version 2 only, as
   7  * published by the Free Software Foundation.  Oracle designates this
   8  * particular file as subject to the "Classpath" exception as provided
   9  * by Oracle in the LICENSE file that accompanied this code.
  10  *
  11  * This code is distributed in the hope that it will be useful, but WITHOUT
  12  * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
  13  * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
  14  * version 2 for more details (a copy is included in the LICENSE file that
  15  * accompanied this code).
  16  *
  17  * You should have received a copy of the GNU General Public License version
  18  * 2 along with this work; if not, write to the Free Software Foundation,
  19  * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
  20  *
  21  * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
  22  * or visit www.oracle.com if you need additional information or have any


  61  * the resolution might not be known when the rendering operations are
  62  * captured, the {@code Graphics2D Transform} is set up
  63  * to transform user coordinates to a virtual device space that
  64  * approximates the expected resolution of the target device. Further
  65  * transformations might need to be applied at playback time if the
  66  * estimate is incorrect.
  67  * <p>
  68  * Some of the operations performed by the rendering attribute objects
  69  * occur in the device space, but all {@code Graphics2D} methods take
  70  * user space coordinates.
  71  * <p>
  72  * Every {@code Graphics2D} object is associated with a target that
  73  * defines where rendering takes place. A
  74  * {@link GraphicsConfiguration} object defines the characteristics
  75  * of the rendering target, such as pixel format and resolution.
  76  * The same rendering target is used throughout the life of a
  77  * {@code Graphics2D} object.
  78  * <p>
  79  * When creating a {@code Graphics2D} object, the
  80  * {@code GraphicsConfiguration}
  81  * specifies the <a id="deftransform">default transform</a> for
  82  * the target of the {@code Graphics2D} (a
  83  * {@link Component} or {@link Image}).  This default transform maps the
  84  * user space coordinate system to screen and printer device coordinates
  85  * such that the origin maps to the upper left hand corner of the
  86  * target region of the device with increasing X coordinates extending
  87  * to the right and increasing Y coordinates extending downward.
  88  * The scaling of the default transform is set to identity for those devices
  89  * that are close to 72 dpi, such as screen devices.
  90  * The scaling of the default transform is set to approximately 72 user
  91  * space coordinates per square inch for high resolution devices, such as
  92  * printers.  For image buffers, the default transform is the
  93  * {@code Identity} transform.
  94  *
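The default-transform rule above (identity for image buffers) can be observed headlessly. A minimal sketch, assuming nothing beyond the standard `java.awt` API; the class and method names are illustrative:

```java
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

public class DefaultTransformDemo {
    // Returns the initial transform of a Graphics2D created for an
    // image buffer, which the class comment says is the identity.
    public static AffineTransform initialTransform() {
        BufferedImage buffer =
            new BufferedImage(100, 100, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g2d = buffer.createGraphics();
        AffineTransform t = g2d.getTransform();
        g2d.dispose();  // always release the graphics context
        return t;
    }

    public static void main(String[] args) {
        // For image buffers the default transform is the identity transform.
        System.out.println(initialTransform().isIdentity());
    }
}
```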
  95  * <h2>Rendering Process</h2>
  96  * The Rendering Process can be broken down into four phases that are
  97  * controlled by the {@code Graphics2D} rendering attributes.
  98  * The renderer can optimize many of these steps, either by caching the
  99  * results for future calls, by collapsing multiple virtual steps into
 100  * a single operation, or by recognizing various attributes as common
 101  * simple cases that can be eliminated by modifying other parts of the


 112  * manipulation methods of {@code Graphics} and
 113  * {@code Graphics2D}.  This <i>user clip</i>
 114  * is transformed into device space by the current
 115  * {@code Transform} and combined with the
 116  * <i>device clip</i>, which is defined by the visibility of windows and
 117  * device extents.  The combination of the user clip and device clip
 118  * defines the <i>composite clip</i>, which determines the final clipping
 119  * region.  The user clip is not modified by the rendering
 120  * system to reflect the resulting composite clip.
 121  * <li>
 122  * Determine what colors to render.
 123  * <li>
 124  * Apply the colors to the destination drawing surface using the current
 125  * {@link Composite} attribute in the {@code Graphics2D} context.
 126  * </ol>
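The clipping behavior in the first phase can be exercised against an image buffer; note that, as stated above, `getClipBounds` reports the user clip (in user space), not the composite clip. A sketch with illustrative names:

```java
import java.awt.Graphics2D;
import java.awt.Rectangle;
import java.awt.image.BufferedImage;

public class UserClipDemo {
    // Sets an initial user clip, intersects it with a second rectangle,
    // and returns the resulting user-clip bounds (all in user space).
    public static Rectangle clippedBounds() {
        BufferedImage buffer =
            new BufferedImage(200, 200, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2d = buffer.createGraphics();
        g2d.setClip(new Rectangle(0, 0, 100, 100)); // replace the user clip
        g2d.clip(new Rectangle(50, 50, 100, 100));  // intersect with it
        Rectangle bounds = g2d.getClipBounds();     // user clip, user space
        g2d.dispose();
        return bounds;
    }

    public static void main(String[] args) {
        // Intersection of the two rectangles: x=50, y=50, 50x50.
        System.out.println(clippedBounds());
    }
}
```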
 127  * <br>
 128  * The three types of rendering operations, along with details of each
 129  * of their particular rendering processes are:
 130  * <ol>
 131  * <li>
 132  * <b><a id="rendershape">{@code Shape} operations</a></b>
 133  * <ol>
 134  * <li>
 135  * If the operation is a {@code draw(Shape)} operation, then
 136  * the  {@link Stroke#createStrokedShape(Shape) createStrokedShape}
 137  * method on the current {@link Stroke} attribute in the
 138  * {@code Graphics2D} context is used to construct a new
 139  * {@code Shape} object that contains the outline of the specified
 140  * {@code Shape}.
 141  * <li>
 142  * The {@code Shape} is transformed from user space to device space
 143  * using the current {@code Transform}
 144  * in the {@code Graphics2D} context.
 145  * <li>
 146  * The outline of the {@code Shape} is extracted using the
 147  * {@link Shape#getPathIterator(AffineTransform) getPathIterator} method of
 148  * {@code Shape}, which returns a
 149  * {@link java.awt.geom.PathIterator PathIterator}
 150  * object that iterates along the boundary of the {@code Shape}.
 151  * <li>
 152  * If the {@code Graphics2D} object cannot handle the curved segments
 153  * that the {@code PathIterator} object returns then it can call the
 154  * alternate
 155  * {@link Shape#getPathIterator(AffineTransform, double) getPathIterator}
 156  * method of {@code Shape}, which flattens the {@code Shape}.
 157  * <li>
 158  * The current {@link Paint} in the {@code Graphics2D} context
 159  * is queried for a {@link PaintContext}, which specifies the
 160  * colors to render in device space.
 161  * </ol>
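The first step of the `Shape` pipeline (outlining via the `Stroke` attribute's `createStrokedShape`) can be sketched without a `Graphics2D` at all: stroking a zero-area line yields the filled region that `draw(Shape)` would render. Names here are illustrative:

```java
import java.awt.BasicStroke;
import java.awt.Shape;
import java.awt.geom.Line2D;

public class StrokedShapeDemo {
    // Converts a zero-area line into the filled outline that draw(Shape)
    // would render, via the Stroke attribute's createStrokedShape method.
    public static Shape strokeLine(float width) {
        BasicStroke stroke = new BasicStroke(width);
        return stroke.createStrokedShape(new Line2D.Double(0, 10, 100, 10));
    }

    public static void main(String[] args) {
        // A 4-unit-wide stroke of a horizontal line at y=10 covers
        // roughly y = 8..12 along the line's length.
        System.out.println(strokeLine(4f).getBounds2D());
    }
}
```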
 162  * <li>
 163  * <b><a id="rendertext">Text operations</a></b>
 164  * <ol>
 165  * <li>
 166  * The following steps are used to determine the set of glyphs required
 167  * to render the indicated {@code String}:
 168  * <ol>
 169  * <li>
 170  * If the argument is a {@code String}, then the current
 171  * {@code Font} in the {@code Graphics2D} context is asked to
 172  * convert the Unicode characters in the {@code String} into a set of
 173  * glyphs for presentation with whatever basic layout and shaping
 174  * algorithms the font implements.
 175  * <li>
 176  * If the argument is an
 177  * {@link AttributedCharacterIterator},
 178  * the iterator is asked to convert itself to a
 179  * {@link java.awt.font.TextLayout TextLayout}
 180  * using its embedded font attributes. The {@code TextLayout}
 181  * implements more sophisticated glyph layout algorithms that
 182  * perform Unicode bi-directional layout adjustments automatically
 183  * for multiple fonts of differing writing directions.
 184  * <li>
 185  * If the argument is a
 186  * {@link GlyphVector}, then the
 187  * {@code GlyphVector} object already contains the appropriate
 188  * font-specific glyph codes with explicit coordinates for the position of
 189  * each glyph.
 190  * </ol>
 191  * <li>
 192  * The current {@code Font} is queried to obtain outlines for the
 193  * indicated glyphs.  These outlines are treated as shapes in user space
 194  * relative to the position of each glyph that was determined in step 1.
 195  * <li>
 196  * The character outlines are filled as indicated above
 197  * under <a href="#rendershape">{@code Shape} operations</a>.
 198  * <li>
 199  * The current {@code Paint} is queried for a
 200  * {@code PaintContext}, which specifies
 201  * the colors to render in device space.
 202  * </ol>
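Step 1a of the text pipeline (a `String` converted to glyphs by the current `Font`) can be reproduced without a `Graphics2D` by asking a `Font` for a `GlyphVector` directly. The sketch below uses the logical `Dialog` font and a default `FontRenderContext` to stay platform-neutral; class and method names are illustrative:

```java
import java.awt.Font;
import java.awt.font.FontRenderContext;
import java.awt.font.GlyphVector;

public class GlyphDemo {
    // Asks a Font to convert Unicode characters into font-specific
    // glyphs, as the text pipeline does for drawString(String, ...).
    public static GlyphVector layout(String text) {
        Font font = new Font(Font.DIALOG, Font.PLAIN, 12);
        // Identity transform; antialiasing hints do not affect layout.
        FontRenderContext frc = new FontRenderContext(null, false, false);
        return font.createGlyphVector(frc, text);
    }

    public static void main(String[] args) {
        GlyphVector gv = layout("Hello");
        // Each glyph carries an explicit position, just as a GlyphVector
        // passed to drawGlyphVector would.
        System.out.println(gv.getNumGlyphs() + " glyphs, first at "
                + gv.getGlyphPosition(0));
    }
}
```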
 203  * <li>
 204  * <b><a id="renderingimage">{@code Image} Operations</a></b>
 205  * <ol>
 206  * <li>
 207  * The region of interest is defined by the bounding box of the source
 208  * {@code Image}.
 209  * This bounding box is specified in Image Space, which is the
 210  * {@code Image} object's local coordinate system.
 211  * <li>
 212  * If an {@code AffineTransform} is passed to
 213  * {@link #drawImage(java.awt.Image, java.awt.geom.AffineTransform, java.awt.image.ImageObserver) drawImage(Image, AffineTransform, ImageObserver)},
 214  * the {@code AffineTransform} is used to transform the bounding
 215  * box from image space to user space. If no {@code AffineTransform}
 216  * is supplied, the bounding box is treated as if it is already in user space.
 217  * <li>
 218  * The bounding box of the source {@code Image} is transformed from user
 219  * space into device space using the current {@code Transform}.
 220  * Note that the result of transforming the bounding box does not
 221  * necessarily result in a rectangular region in device space.
 222  * <li>
 223  * The {@code Image} object determines what colors to render,
 224  * sampled according to the source to destination
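Steps 2 and 3 of the `Image` pipeline compose the optional per-call `AffineTransform` with the context `Transform`. The headless sketch below shows the per-call form of `drawImage` moving a source image's bounding box into user space; all names are illustrative:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

public class DrawImageTransformDemo {
    // Draws a solid-red source image through a per-call AffineTransform,
    // the form taken by drawImage(Image, AffineTransform, ImageObserver),
    // and samples a destination pixel inside the transformed bounding box.
    public static int samplePixel() {
        BufferedImage src =
            new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        Graphics2D sg = src.createGraphics();
        sg.setColor(Color.RED);
        sg.fillRect(0, 0, 10, 10);
        sg.dispose();

        BufferedImage dst =
            new BufferedImage(40, 40, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2d = dst.createGraphics();
        // Image space -> user space: translate the 10x10 source to (20, 20).
        g2d.drawImage(src, AffineTransform.getTranslateInstance(20, 20), null);
        g2d.dispose();
        return dst.getRGB(25, 25); // inside the translated bounding box
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(samplePixel()));
    }
}
```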

