Initial Kernel for ODROID-XU patchset

This commit is contained in:
Mauro Ribeiro
2013-08-17 14:39:53 -03:00
parent db97d29385
commit df0c5eea57
2151 changed files with 787078 additions and 15289 deletions

COPYING.txt (new executable file, 349 lines)

@@ -0,0 +1,349 @@
This software contains copyrighted software that is licensed under the GPL.
You may obtain the complete Corresponding Source code from us for a period of three years after our last shipment of this product by sending email to:
oss.request@samsung.com
This offer is valid to anyone in receipt of this information.
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Library General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Library General
Public License instead of this License.


@@ -1,7 +0,0 @@
filesystems/dnotify_test
laptops/dslm
timers/hpet_example
vm/hugepage-mmap
vm/hugepage-shm
vm/map_hugetlb


@@ -96,16 +96,26 @@ Description:
is read-only. If the device is not enabled to wake up the
system from sleep states, this attribute is not present.
What: /sys/devices/.../power/wakeup_abort_count
Date: February 2012
Contact: Rafael J. Wysocki <rjw@sisk.pl>
Description:
The /sys/devices/.../wakeup_abort_count attribute contains the
number of times the processing of a wakeup event associated with
the device might have aborted system transition into a sleep
state in progress. This attribute is read-only. If the device
is not enabled to wake up the system from sleep states, this
attribute is not present.
What: /sys/devices/.../power/wakeup_expire_count
Date: February 2012
Contact: Rafael J. Wysocki <rjw@sisk.pl>
Description:
The /sys/devices/.../wakeup_expire_count attribute contains the
number of times a wakeup event associated with the device has
been reported with a timeout that expired. This attribute is
read-only. If the device is not enabled to wake up the system
from sleep states, this attribute is not present.
What: /sys/devices/.../power/wakeup_active
Date: September 2010
@@ -148,6 +158,17 @@ Description:
not enabled to wake up the system from sleep states, this
attribute is not present.
What: /sys/devices/.../power/wakeup_prevent_sleep_time_ms
Date: February 2012
Contact: Rafael J. Wysocki <rjw@sisk.pl>
Description:
The /sys/devices/.../wakeup_prevent_sleep_time_ms attribute
contains the total time the device has been preventing
opportunistic transitions to sleep states from occurring.
This attribute is read-only. If the device is not enabled to
wake up the system from sleep states, this attribute is not
present.
What: /sys/devices/.../power/autosuspend_delay_ms
Date: September 2010
Contact: Alan Stern <stern@rowland.harvard.edu>


@@ -172,3 +172,62 @@ Description:
Reading from this file will display the current value, which is
set to 1 MB by default.
What: /sys/power/autosleep
Date: April 2012
Contact: Rafael J. Wysocki <rjw@sisk.pl>
Description:
The /sys/power/autosleep file can be written one of the strings
returned by reads from /sys/power/state. If that happens, a
work item attempting to trigger a transition of the system to
the sleep state represented by that string is queued up. This
attempt will only succeed if there are no active wakeup sources
in the system at that time. After every execution, regardless
of whether or not the attempt to put the system to sleep has
succeeded, the work item requeues itself until user space
writes "off" to /sys/power/autosleep.
Reading from this file causes the last string successfully
written to it to be returned.
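As a rough illustration of the mechanism above, user space could request opportunistic suspend by writing a state string to /sys/power/autosleep. The helper below is hypothetical (not part of the kernel interface) and takes the path as a parameter, so the round trip can be exercised on any writable file:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write a NUL-terminated string to a sysfs-style attribute file.
 * Hypothetical helper, not part of the kernel interface.
 * Returns 0 on success, -1 on failure. */
static int write_attr(const char *path, const char *value)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return -1;
    ssize_t len = (ssize_t)strlen(value);
    ssize_t n = write(fd, value, len);
    close(fd);
    return n == len ? 0 : -1;
}
```

For example, `write_attr("/sys/power/autosleep", "mem")` would queue up opportunistic suspend to the "mem" state (assuming "mem" is among the strings read from /sys/power/state), and writing "off" stops the requeuing.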
What: /sys/power/wake_lock
Date: February 2012
Contact: Rafael J. Wysocki <rjw@sisk.pl>
Description:
The /sys/power/wake_lock file allows user space to create
wakeup source objects and activate them on demand (if one of
those wakeup sources is active, reads from the
/sys/power/wakeup_count file block or return false). When a
string without white space is written to /sys/power/wake_lock,
it will be assumed to represent a wakeup source name. If there
is a wakeup source object with that name, it will be activated
(unless active already). Otherwise, a new wakeup source object
will be registered, assigned the given name and activated.
If a string written to /sys/power/wake_lock contains white
space, the part of the string preceding the white space will be
regarded as a wakeup source name and handled as described above.
The other part of the string will be regarded as a timeout (in
nanoseconds) such that the wakeup source will be automatically
deactivated after it has expired. The timeout, if present, is
set regardless of the current state of the wakeup source object
in question.
Reads from this file return a string consisting of the names of
wakeup sources created with the help of it that are active at
the moment, separated with spaces.
What: /sys/power/wake_unlock
Date: February 2012
Contact: Rafael J. Wysocki <rjw@sisk.pl>
Description:
The /sys/power/wake_unlock file allows user space to deactivate
wakeup sources created with the help of /sys/power/wake_lock.
When a string is written to /sys/power/wake_unlock, it will be
assumed to represent the name of a wakeup source to deactivate.
If a wakeup source object of that name exists and is active at
the moment, it will be deactivated.
Reads from this file return a string consisting of the names of
wakeup sources created with the help of /sys/power/wake_lock
that are inactive at the moment, separated with spaces.
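The string format described above (a wakeup source name, optionally followed by a timeout in nanoseconds, separated by white space) can be sketched in user space. The helper name below is made up for illustration:

```c
#include <stdio.h>
#include <string.h>

/* Build the string user space writes to /sys/power/wake_lock:
 * "<name>" to activate a wakeup source with no timeout, or
 * "<name> <timeout_ns>" to auto-deactivate it after the timeout.
 * Returns the formatted length, or -1 if the name contains
 * white space (names must not). Hypothetical helper. */
static int format_wake_lock(char *buf, size_t size,
                            const char *name, long long timeout_ns)
{
    if (strchr(name, ' ') || strchr(name, '\t'))
        return -1;
    if (timeout_ns > 0)
        return snprintf(buf, size, "%s %lld", name, timeout_ns);
    return snprintf(buf, size, "%s", name);
}
```

Writing the result to /sys/power/wake_lock activates (or creates) the wakeup source; writing just the name to /sys/power/wake_unlock deactivates it.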


@@ -1,14 +0,0 @@
*.xml
*.ps
*.pdf
*.html
*.9.gz
*.9
*.aux
*.dvi
*.log
*.out
*.png
*.gif
media-indices.tmpl
media-entities.tmpl


@@ -1 +0,0 @@
!*.xml


@@ -1 +0,0 @@
!*.xml


@@ -2410,6 +2410,35 @@ details.</para>
</orderedlist>
</section>
<section>
<title>V4L2 in Linux 3.5</title>
<orderedlist>
<listitem>
<para>Added integer menus, the new type will be
V4L2_CTRL_TYPE_INTEGER_MENU.</para>
</listitem>
<listitem>
<para>Added selection API for V4L2 subdev interface:
&VIDIOC-SUBDEV-G-SELECTION; and
&VIDIOC-SUBDEV-S-SELECTION;.</para>
</listitem>
<listitem>
<para> Added <constant>V4L2_COLORFX_ANTIQUE</constant>,
<constant>V4L2_COLORFX_ART_FREEZE</constant>,
<constant>V4L2_COLORFX_AQUA</constant>,
<constant>V4L2_COLORFX_SILHOUETTE</constant>,
<constant>V4L2_COLORFX_SOLARIZATION</constant>,
<constant>V4L2_COLORFX_VIVID</constant> and
<constant>V4L2_COLORFX_ARBITRARY_CBCR</constant> menu items
to the <constant>V4L2_CID_COLORFX</constant> control.</para>
</listitem>
<listitem>
<para> Added <constant>V4L2_CID_COLORFX_CBCR</constant> control.</para>
</listitem>
</orderedlist>
</section>
<section id="other">
<title>Relation of V4L2 to other Linux multimedia APIs</title>


@@ -285,18 +285,92 @@ minimum value disables backlight compensation.</entry>
<row id="v4l2-colorfx">
<entry><constant>V4L2_CID_COLORFX</constant></entry>
<entry>enum</entry>
<entry>Selects a color effect. The following values are defined:
</entry>
</row><row>
<entry></entry>
<entry></entry>
<entrytbl spanname="descr" cols="2">
<tbody valign="top">
<row>
<entry><constant>V4L2_COLORFX_NONE</constant>&nbsp;</entry>
<entry>Color effect is disabled.</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_ANTIQUE</constant>&nbsp;</entry>
<entry>An aging (old photo) effect.</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_ART_FREEZE</constant>&nbsp;</entry>
<entry>Frost color effect.</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_AQUA</constant>&nbsp;</entry>
<entry>Water color, cool tone.</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_BW</constant>&nbsp;</entry>
<entry>Black and white.</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_EMBOSS</constant>&nbsp;</entry>
<entry>Emboss, the highlights and shadows replace light/dark boundaries
and low contrast areas are set to a gray background.</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_GRASS_GREEN</constant>&nbsp;</entry>
<entry>Grass green.</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_NEGATIVE</constant>&nbsp;</entry>
<entry>Negative.</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_SEPIA</constant>&nbsp;</entry>
<entry>Sepia tone.</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_SKETCH</constant>&nbsp;</entry>
<entry>Sketch.</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_SKIN_WHITEN</constant>&nbsp;</entry>
<entry>Skin whiten.</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_SKY_BLUE</constant>&nbsp;</entry>
<entry>Sky blue.</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_SOLARIZATION</constant>&nbsp;</entry>
<entry>Solarization, the image is partially reversed in tone,
only color values above or below a certain threshold are inverted.
</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_SILHOUETTE</constant>&nbsp;</entry>
<entry>Silhouette (outline).</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_VIVID</constant>&nbsp;</entry>
<entry>Vivid colors.</entry>
</row>
<row>
<entry><constant>V4L2_COLORFX_SET_CBCR</constant>&nbsp;</entry>
<entry>The Cb and Cr chroma components are replaced by fixed
coefficients determined by <constant>V4L2_CID_COLORFX_CBCR</constant>
control.</entry>
</row>
</tbody>
</entrytbl>
</row>
<row>
<entry><constant>V4L2_CID_COLORFX_CBCR</constant></entry>
<entry>integer</entry>
<entry>Determines the Cb and Cr coefficients for <constant>V4L2_COLORFX_SET_CBCR</constant>
color effect. Bits [7:0] of the supplied 32 bit value are interpreted as
Cr component, bits [15:8] as Cb component and bits [31:16] must be zero.
</entry>
</row>
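The bit layout above can be packed with a one-line helper; the function name is illustrative, not part of the V4L2 API:

```c
#include <stdint.h>

/* Pack Cb and Cr replacement coefficients into the 32-bit value
 * taken by the V4L2_CID_COLORFX_CBCR control: bits [7:0] carry
 * the Cr component, bits [15:8] carry Cb, and bits [31:16] stay
 * zero. Illustrative helper only. */
static inline uint32_t colorfx_cbcr_value(uint8_t cb, uint8_t cr)
{
    return ((uint32_t)cb << 8) | (uint32_t)cr;
}
```

The resulting value would then be set on the control (together with selecting V4L2_COLORFX_SET_CBCR on V4L2_CID_COLORFX) through the usual control ioctls.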
<row>
<entry><constant>V4L2_CID_ROTATE</constant></entry>


@@ -127,6 +127,16 @@ structs, ioctls) must be noted in more detail in the history chapter
(compat.xml), along with the possible impact on existing drivers and
applications. -->
<revision>
<revnumber>3.5</revnumber>
<date>2012-04-02</date>
<authorinitials>sa, sn</authorinitials>
<revremark>Added V4L2_CTRL_TYPE_INTEGER_MENU and V4L2 subdev
selections API. Improved the description of V4L2_CID_COLORFX
control, added V4L2_CID_COLORFX_CBCR control.
</revremark>
</revision>
<revision>
<revnumber>3.4</revnumber>
<date>2012-01-25</date>


@@ -1 +0,0 @@
getdelays

Documentation/android.txt (new file, 121 lines)

@@ -0,0 +1,121 @@
=============
A N D R O I D
=============
Copyright (C) 2009 Google, Inc.
Written by Mike Chan <mike@android.com>
CONTENTS:
---------
1. Android
1.1 Required enabled config options
1.2 Required disabled config options
1.3 Recommended enabled config options
2. Contact
1. Android
==========
Android (www.android.com) is an open source operating system for mobile devices.
This document describes configurations needed to run the Android framework on
top of the Linux kernel.
To see a working defconfig, look at msm_defconfig or goldfish_defconfig,
which can be found at http://android.git.kernel.org in kernel/common.git
and kernel/msm.git.
1.1 Required enabled config options
-----------------------------------
After building a standard defconfig, ensure that these options are enabled in
your .config or defconfig if they are not already. This list is based on the
msm_defconfig. You should keep the rest of the default options enabled in the
defconfig unless you know what you are doing.
ANDROID_PARANOID_NETWORK
ASHMEM
CONFIG_FB_MODE_HELPERS
CONFIG_FONT_8x16
CONFIG_FONT_8x8
CONFIG_YAFFS_SHORT_NAMES_IN_RAM
DAB
EARLYSUSPEND
FB
FB_CFB_COPYAREA
FB_CFB_FILLRECT
FB_CFB_IMAGEBLIT
FB_DEFERRED_IO
FB_TILEBLITTING
HIGH_RES_TIMERS
INOTIFY
INOTIFY_USER
INPUT_EVDEV
INPUT_GPIO
INPUT_MISC
LEDS_CLASS
LEDS_GPIO
LOCK_KERNEL
LOGGER
LOW_MEMORY_KILLER
MISC_DEVICES
NEW_LEDS
NO_HZ
POWER_SUPPLY
PREEMPT
RAMFS
RTC_CLASS
RTC_LIB
SWITCH
SWITCH_GPIO
TMPFS
UID_STAT
UID16
USB_FUNCTION
USB_FUNCTION_ADB
USER_WAKELOCK
VIDEO_OUTPUT_CONTROL
WAKELOCK
YAFFS_AUTO_YAFFS2
YAFFS_FS
YAFFS_YAFFS1
YAFFS_YAFFS2
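In a generated .config these switches appear with the CONFIG_ prefix and =y; a short hypothetical fragment covering a few of the options above would look like:

```
CONFIG_ASHMEM=y
CONFIG_ANDROID_PARANOID_NETWORK=y
CONFIG_LOW_MEMORY_KILLER=y
CONFIG_WAKELOCK=y
CONFIG_HIGH_RES_TIMERS=y
```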
1.2 Required disabled config options
------------------------------------
CONFIG_YAFFS_DISABLE_LAZY_LOAD
DNOTIFY
1.3 Recommended enabled config options
--------------------------------------
ANDROID_PMEM
ANDROID_RAM_CONSOLE
ANDROID_RAM_CONSOLE_ERROR_CORRECTION
SCHEDSTATS
DEBUG_PREEMPT
DEBUG_MUTEXES
DEBUG_SPINLOCK_SLEEP
DEBUG_INFO
FRAME_POINTER
CPU_FREQ
CPU_FREQ_TABLE
CPU_FREQ_DEFAULT_GOV_ONDEMAND
CPU_FREQ_GOV_ONDEMAND
CRC_CCITT
EMBEDDED
INPUT_TOUCHSCREEN
I2C
I2C_BOARDINFO
LOG_BUF_SHIFT=17
SERIAL_CORE
SERIAL_CORE_CONSOLE
2. Contact
==========
website: http://android.git.kernel.org
mailing-lists: android-kernel@googlegroups.com


@@ -1 +0,0 @@
cfag12864b-example


there are no tasks in the cgroup. If pre_destroy() returns an error code,
rmdir() will fail with it. From this behavior, pre_destroy() can be
called multiple times against a cgroup.
int allow_attach(struct cgroup *cgrp, struct cgroup_taskset *tset)
(cgroup_mutex held by caller)
Called prior to moving a task into a cgroup; if the subsystem
returns an error, this will abort the attach operation. Used
to extend the permission checks - if all subsystems in a cgroup
return 0, the attach will be allowed to proceed, even if the
default permission check (root or same user) fails.
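The decision rule described above can be modelled in a few lines of plain C. This is a user-space sketch of the semantics, not the kernel callback itself, and the function name is made up:

```c
#include <stddef.h>

/* Toy model of the attach decision: allow_ret[i] stands for the
 * value returned by subsystem i's allow_attach() (0 = allow,
 * negative errno = deny). Any non-zero value aborts the attach;
 * if every subsystem returns 0, the attach proceeds even when
 * the default permission check (root or same user) has failed. */
static int attach_permitted(const int *allow_ret, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (allow_ret[i] != 0)
            return 0;   /* one subsystem objected: abort */
    return 1;           /* unanimous 0: allowed */
}
```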
int can_attach(struct cgroup *cgrp, struct cgroup_taskset *tset)
(cgroup_mutex held by caller)


@@ -1 +0,0 @@
ucon


@@ -0,0 +1,623 @@
-*- org -*-
* Contiguous Memory Allocator
The Contiguous Memory Allocator (CMA) is a framework, which allows
setting up a machine-specific configuration for physically-contiguous
memory management. Memory for devices is then allocated according
to that configuration.
The main role of the framework is not to allocate memory, but to
parse and manage memory configurations, as well as to act as an
intermediary between device drivers and pluggable allocators. It is
thus not tied to any memory allocation method or strategy.
** Why is it needed?
Various devices on embedded systems have no scatter-gather and/or
IO map support and as such require contiguous blocks of memory to
operate. They include devices such as cameras, hardware video
decoders and encoders, etc.
Such devices often require big memory buffers (a full HD frame is,
for instance, more than 2 megapixels large, i.e. more than 6 MB
of memory), which makes mechanisms such as kmalloc() ineffective.
Some embedded devices impose additional requirements on the
buffers, e.g. they can operate only on buffers allocated in
particular location/memory bank (if system has more than one
memory bank) or buffers aligned to a particular memory boundary.
Development of embedded devices has seen a big rise recently
(especially in the V4L area) and many such drivers include their
own memory allocation code. Most of them use bootmem-based methods.
The CMA framework is an attempt to unify contiguous memory allocation
mechanisms and to provide a simple API for device drivers, while
staying as customisable and modular as possible.
** Design
The main design goal for the CMA was to provide a customisable and
modular framework, which could be configured to suit the needs of
individual systems. Configuration specifies a list of memory
regions, which then are assigned to devices. Memory regions can
be shared among many device drivers or assigned exclusively to
one. This has been achieved in the following ways:
1. The core of the CMA does not handle allocation of memory and
management of free space. Dedicated allocators are used for
that purpose.
This way, if the provided solution does not match demands
imposed on a given system, one can develop a new algorithm and
easily plug it into the CMA framework.
The presented solution includes an implementation of a best-fit
algorithm.
2. When requesting memory, devices have to introduce themselves.
This way CMA knows who the memory is allocated for. This
allows the system architect to specify which memory regions
each device should use.
3. Memory regions are grouped into various "types". When a device
requests a chunk of memory, it can specify what type of memory
it needs. If no type is specified, "common" is assumed.
This makes it possible to configure the system in such a way
that a single device may get memory from different memory
regions, depending on the "type" of memory it requested. For
example, a video codec driver might want to allocate some
shared buffers from the first memory bank and the other from
the second to get the highest possible memory throughput.
4. For greater flexibility and extensibility, the framework allows
device drivers to register private regions of reserved memory
which then may be used only by them.
As a result, even if a driver does not use the rest of the CMA
interface, it can still use CMA allocators and other
mechanisms.
4a. Early in the boot process, device drivers can also request the
CMA framework to reserve a region of memory for them
which will then be used as a private region.
This way, drivers do not need to directly call bootmem,
memblock or a similar early allocator but merely register an
early region, and the framework will handle the rest,
including choosing the right early allocator.
5. CMA allows run-time configuration of the memory regions it
will use to allocate chunks of memory from. The set of memory
regions is given on the command line so it can be easily changed
without the need to recompile the kernel.
Each region has its own size, alignment demand, a start
address (physical address where it should be placed) and an
allocator algorithm assigned to the region.
This means that there can be different algorithms running at
the same time, if different devices on the platform have
distinct memory usage characteristics and different algorithms
match those best.
** Use cases
Let's analyse some imaginary system that uses the CMA to see how
the framework can be used and configured.
We have a platform with a hardware video decoder and a camera each
needing 20 MiB of memory in the worst case. Our system is written
in such a way though that the two devices are never used at the
same time and memory for them may be shared. In such a system the
following configuration would be used in the platform
initialisation code:
static struct cma_region regions[] = {
	{ .name = "region", .size = 20 << 20 },
	{ }
};
static const char map[] __initconst = "video,camera=region";
cma_set_defaults(regions, map);
The regions array defines a single 20-MiB region named "region".
The map says that drivers named "video" and "camera" are to be
granted memory from the previously defined region.
A shorter map can be used as well:
static const char map[] __initconst = "*=region";
The asterisk ("*") matches all devices thus all devices will use
the region named "region".
We can see that, because the devices share the same memory region,
we save 20 MiB compared to the situation where each of the devices
reserves 20 MiB of memory for itself.
Now, let's say that we also have many other smaller devices and we
want them to share some smaller pool of memory, for instance 5
MiB. This can be achieved in the following way:
static struct cma_region regions[] = {
	{ .name = "region", .size = 20 << 20 },
	{ .name = "common", .size = 5 << 20 },
	{ }
};
static const char map[] __initconst =
"video,camera=region;*=common";
cma_set_defaults(regions, map);
This instructs CMA to reserve two regions and let video and camera
use region "region" whereas all other devices should use region
"common".
Later on, after some development, the system can now run the
video decoder and camera at the same time. The 20 MiB region is
no longer enough for the two to share. A quick fix is to
grant each of those devices a separate region:
static struct cma_region regions[] = {
	{ .name = "v", .size = 20 << 20 },
	{ .name = "c", .size = 20 << 20 },
	{ .name = "common", .size = 5 << 20 },
	{ }
};
static const char map[] __initconst = "video=v;camera=c;*=common";
cma_set_defaults(regions, map);
This solution also shows how with CMA you can assign private pools
of memory to each device if that is required.
Allocation mechanisms can be replaced dynamically in a similar
manner as well. Let's say that during testing, it has been
discovered that, for a given shared region of 40 MiB,
fragmentation has become a problem. It has been observed that,
after some time, it becomes impossible to allocate buffers of the
required sizes. So to satisfy our requirements, we would have to
reserve a larger shared region beforehand.
But fortunately, you have also managed to develop a new allocation
algorithm -- Neat Allocation Algorithm or "na" for short -- which
satisfies the needs for both devices even on a 30 MiB region. The
configuration can be then quickly changed to:
static struct cma_region regions[] = {
	{ .name = "region", .size = 30 << 20, .alloc_name = "na" },
	{ .name = "common", .size = 5 << 20 },
	{ }
};
static const char map[] __initconst = "video,camera=region;*=common";
cma_set_defaults(regions, map);
This shows how you can develop your own allocation algorithms if
the ones provided with CMA do not suit your needs, and easily
replace them without the need to modify the CMA core or even
recompile the kernel.
** Technical Details
*** The attributes
As shown above, CMA is configured by two attributes: a list of
regions and a map. The first one specifies the regions that are
to be reserved for CMA. The second one specifies which regions
each device is assigned to.
**** Regions
Regions is a list of regions terminated by a region with size
equal to zero. The following fields may be set:
- size -- size of the region (required, must not be zero)
- alignment -- alignment of the region; must be power of two or
zero (optional)
- start -- where the region has to start (optional)
- alloc_name -- the name of allocator to use (optional)
- alloc -- allocator to use (optional; alloc_name is
probably what you want instead)
size, alignment and start are specified in bytes. Size will be
aligned up to PAGE_SIZE. If alignment is less than PAGE_SIZE,
it will be set to PAGE_SIZE. start will be aligned to
alignment.
If command line parameter support is enabled, this attribute can
also be overridden by the "cma" command line parameter. When given
on the command line its format is as follows:
regions-attr ::= [ regions [ ';' ] ]
regions ::= region [ ';' regions ]
region ::= REG-NAME
'=' size
[ '@' start ]
[ '/' alignment ]
[ ':' ALLOC-NAME ]
size ::= MEMSIZE // size of the region
start ::= MEMSIZE // desired start address of
// the region
alignment ::= MEMSIZE // alignment of the start
// address of the region
REG-NAME specifies the name of the region. All regions given
via the regions attribute need to have a name. Moreover, all
regions need to have a unique name. If two regions have the same
name, it is unspecified which will be used when allocating memory
from a region with that name.
ALLOC-NAME specifies the name of allocator to be used with the
region. If no allocator name is provided, the "default"
allocator will be used with the region. The "default" allocator
is, of course, the first allocator that has been registered. ;)
size, start and alignment are specified in bytes with suffixes
that memparse() accepts. If start is given, the region will be
reserved at the given starting address (or as close to it as
possible). If alignment is specified, the region will be aligned
to the given value.
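To make the format concrete, here is a user-space sketch of parsing a single region specification. parse_memsize() mimics memparse()'s size suffixes; none of this is the kernel's actual parser, and the fixed-size name fields are an arbitrary simplification:

```c
#include <stdlib.h>
#include <string.h>

/* memparse()-style size: a number with an optional K/M/G suffix. */
static unsigned long long parse_memsize(const char *s, const char **end)
{
	unsigned long long v = strtoull(s, (char **)end, 0);

	switch (**end) {
	case 'G': case 'g': v <<= 10; /* fall through */
	case 'M': case 'm': v <<= 10; /* fall through */
	case 'K': case 'k': v <<= 10; (*end)++;
	}
	return v;
}

struct region_spec {
	char name[16];
	char alloc[16];
	unsigned long long size, start, alignment;
};

/* Parse one region of the form NAME=size[@start][/alignment][:ALLOC-NAME]. */
static int parse_region(const char *s, struct region_spec *r)
{
	const char *eq = strchr(s, '=');

	if (!eq || (size_t)(eq - s) >= sizeof(r->name))
		return -1;
	memset(r, 0, sizeof(*r));
	memcpy(r->name, s, eq - s);
	s = eq + 1;
	r->size = parse_memsize(s, &s);
	if (*s == '@')
		r->start = parse_memsize(s + 1, &s);
	if (*s == '/')
		r->alignment = parse_memsize(s + 1, &s);
	if (*s == ':')
		strncpy(r->alloc, s + 1, sizeof(r->alloc) - 1);
	return 0;
}
```

So "r1=64M@512M/1M:foo" yields a 64 MiB region named "r1" at 512 MiB, 1 MiB-aligned, using allocator "foo".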
**** Map
The format of the "map" attribute is as follows:
map-attr ::= [ rules [ ';' ] ]
rules ::= rule [ ';' rules ]
rule ::= patterns '=' regions
patterns ::= pattern [ ',' patterns ]
regions ::= REG-NAME [ ',' regions ]
// list of regions to try to allocate memory
// from
pattern ::= dev-pattern [ '/' TYPE-NAME ] | '/' TYPE-NAME
// pattern the request must match for the rule to
// apply; the first rule that matches is
// applied; if the dev-pattern part is omitted, a
// value identical to the one used in the previous
// pattern is assumed.
dev-pattern ::= PATTERN
// pattern that the device name must match for the
// rule to apply; may contain question marks
// which match any single character and may end
// with an asterisk which matches the rest of the
// string (including nothing).
It is a sequence of rules which specify which regions a given
(device, type) pair should use. The first rule that matches is
applied.
For a rule to match, its pattern must match the (dev, type) pair.
A pattern consists of a part before and a part after the slash.
The first part must match the device name and the second part
must match the type. If the first part is empty, the device name
is assumed to match iff it matched in the previous pattern. If
the second part is omitted, it will match any type of memory
requested by the device.
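The dev-pattern wildcards can be illustrated with a small stand-alone matcher; this is a sketch of the described semantics only, not the in-kernel implementation:

```c
/* Match a device name against a map pattern: '?' matches any single
 * character; a trailing '*' matches the rest of the string,
 * including nothing.  Returns 1 on match, 0 otherwise. */
static int dev_pattern_match(const char *pattern, const char *name)
{
	for (; *pattern; pattern++, name++) {
		if (*pattern == '*' && pattern[1] == '\0')
			return 1;	/* trailing '*' swallows the rest */
		if (*name == '\0')
			return 0;	/* name ended before the pattern */
		if (*pattern != '?' && *pattern != *name)
			return 0;
	}
	return *name == '\0';		/* both must end together */
}
```

For example, "baz?" matches "baz0" and "baz1" but not "baz10", while "foo*" matches "foo", "foo0" and "foobar".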
If SysFS support is enabled, this attribute is accessible via
SysFS and can be changed at run-time by writing to
/sys/kernel/mm/contiguous/map.
If command line parameter support is enabled, this attribute can
also be overridden by the "cma.map" command line parameter.
**** Examples
Some examples (whitespace added for better readability):
cma = r1 = 64M // 64M region
@512M // starting at address 512M
// (or at least as near as possible)
/1M // make sure it's aligned to 1M
:foo(bar); // uses allocator "foo" with "bar"
// as parameters for it
r2 = 64M // 64M region
/1M; // make sure it's aligned to 1M
// uses the first available allocator
r3 = 64M // 64M region
@512M // starting at address 512M
:foo; // uses allocator "foo" with no parameters
cma_map = foo = r1;
// device foo with kind==NULL uses region r1
foo/quaz = r2; // OR:
/quaz = r2;
// device foo with kind == "quaz" uses region r2
cma_map = foo/quaz = r1;
// device foo with type == "quaz" uses region r1
foo/* = r2; // OR:
/* = r2;
// device foo with any other kind uses region r2
bar = r1,r2;
// device bar uses region r1 or r2
baz?/a , baz?/b = r3;
// devices named baz? where ? is any character
// with type being "a" or "b" use r3
*** The device and types of memory
The name of the device is taken from the device structure. It is
not possible to use CMA if a driver does not register a device
(actually this can be overcome if a fake device structure is
provided with at least the name set).
The type of memory is an optional argument provided by the device
whenever it requests a memory chunk. In many cases this can be
ignored but sometimes it may be required for some devices.
For instance, let's say that there are two memory banks and for
performance reasons a device uses buffers in both of them. The
platform defines memory types "a" and "b" for regions in the two
banks. The device driver would then use those two types to
request memory chunks from different banks. The CMA attributes
could look as follows:
static struct cma_region regions[] = {
	{ .name = "a", .size = 32 << 20 },
	{ .name = "b", .size = 32 << 20, .start = 512 << 20 },
	{ }
};
static const char map[] __initconst = "foo/a=a;foo/b=b;*=a,b";
And whenever the driver allocates memory, it would specify the
type of memory:
buffer1 = cma_alloc(dev, "a", 1 << 20, 0);
buffer2 = cma_alloc(dev, "b", 1 << 20, 0);
If it was needed to try to allocate from the other bank as well if
the dedicated one is full, the map attributes could be changed to:
static const char map[] __initconst = "foo/a=a,b;foo/b=b,a;*=a,b";
On the other hand, if the same driver was used on a system with
only one bank, the configuration could be changed just to:
static struct cma_region regions[] = {
	{ .name = "r", .size = 64 << 20 },
	{ }
};
static const char map[] __initconst = "*=r";
without the need to change the driver at all.
*** Device API
There are three basic calls provided by the CMA framework to
devices. To allocate a chunk of memory, the cma_alloc() function
needs to be used:
dma_addr_t cma_alloc(const struct device *dev, const char *type,
size_t size, dma_addr_t alignment);
If required, the device may specify the alignment in bytes that
the chunk needs to satisfy. It has to be a power of two or zero.
The chunks are always aligned at least to a page.
The type specifies the type of memory as described in the
previous subsection. If the device driver does not care about the
memory type, it can safely pass NULL as the type, which is the
same as passing "common".
The basic usage of the function is just:
addr = cma_alloc(dev, NULL, size, 0);
The function returns bus address of allocated chunk or a value
that evaluates to true if checked with IS_ERR_VALUE(), so the
correct way for checking for errors is:
unsigned long addr = cma_alloc(dev, NULL, size, 0);
if (IS_ERR_VALUE(addr))
/* Error */
return (int)addr;
/* Allocated */
(Make sure to include <linux/err.h> which contains the definition
of the IS_ERR_VALUE() macro.)
An allocated chunk is freed via the cma_free() function:
int cma_free(dma_addr_t addr);
It takes the bus address of the chunk as an argument and frees it.
The last function is cma_info(), which returns information
about regions assigned to a given (dev, type) pair. Its syntax is:
int cma_info(struct cma_info *info,
const struct device *dev,
const char *type);
On successful exit it fills the info structure with the lower and
upper bound of the regions, the total size and the number of
regions assigned to the given (dev, type) pair.
**** Dynamic and private regions
In the basic setup, regions are provided and initialised by
platform initialisation code (which usually uses
cma_set_defaults() for that purpose).
It is, however, possible to create and add regions dynamically
using the cma_region_register() function:
int cma_region_register(struct cma_region *reg);
The region does not have to have a name. If it does not, it won't
be accessible via the standard mapping (the one provided with the
map attribute). Such regions are private and to allocate a chunk
from them, one needs to call:
dma_addr_t cma_alloc_from_region(struct cma_region *reg,
size_t size, dma_addr_t alignment);
It is just like cma_alloc() except that one specifies the region
to allocate memory from. The region must have been registered.
**** Allocating from region specified by name
If a driver prefers allocating from a region, or a list of
regions, whose names it knows, it can use a different call
similar to the previous one:
dma_addr_t cma_alloc_from(const char *regions,
size_t size, dma_addr_t alignment);
The first argument is a comma-separated list of regions the
driver desires CMA to try and allocate from. The list is
terminated by a NUL byte or a semicolon.
Similarly, there is a call for requesting information about named
regions:
int cma_info_about(struct cma_info *info, const char *regions);
Generally, it should not be needed to use those interfaces but
they are provided nevertheless.
**** Registering early regions
An early region is a region that is managed by CMA early in the
boot process. It is the platform's responsibility to reserve
memory for early regions. Later on, when CMA initialises, early
regions with reserved memory are registered as normal regions.
Registering an early region may be a way for a device to request
a private pool of memory without worrying about actually
reserving the memory:
int cma_early_region_register(struct cma_region *reg);
This needs to be done quite early in the boot process, before the
platform traverses the cma_early_regions list to reserve memory.
When the boot process ends, the device driver may see whether the
region was reserved (by checking the reg->reserved flag) and, if
so, whether it was successfully registered as a normal region (by
checking the reg->registered flag). If that is the case, the
device driver can use normal API calls to use the region.
*** Allocator operations
Creating an allocator for CMA requires four functions to be
implemented.
The first two are used to initialise an allocator for a given
driver and to clean up afterwards:
int cma_foo_init(struct cma_region *reg);
void cma_foo_cleanup(struct cma_region *reg);
The first is called when the allocator is attached to a region.
When the function is called, the cma_region structure is fully
initialised (i.e. the starting address and size have correct
values). As a matter of fact, the allocator should never modify
the cma_region structure other than the private_data field, which
it may use to point to its private data.
The second call cleans up and frees all resources the allocator
has allocated for the region. The function can assume that all
chunks allocated from this region have been freed and thus the
whole region is free.
The two other calls are used for allocating and freeing chunks.
They are:
struct cma_chunk *cma_foo_alloc(struct cma_region *reg,
size_t size, dma_addr_t alignment);
void cma_foo_free(struct cma_chunk *chunk);
As the names imply, the first allocates a chunk of memory and the
other frees it. The allocator also manages the cma_chunk object
representing the chunk in physical memory.
Either of those functions can assume that it is the only thread
accessing the region. Therefore, the allocator does not need to
worry about concurrency. Moreover, all arguments are guaranteed
to be valid (i.e. a page-aligned size and a power-of-two
alignment no smaller than a page).
When the allocator is ready, all that is left is to register it
by calling the cma_allocator_register() function:
int cma_allocator_register(struct cma_allocator *alloc);
The argument is a structure with pointers to the above functions
and the allocator's name. The whole call may look something like
this:
static struct cma_allocator alloc = {
.name = "foo",
.init = cma_foo_init,
.cleanup = cma_foo_cleanup,
.alloc = cma_foo_alloc,
.free = cma_foo_free,
};
return cma_allocator_register(&alloc);
The name ("foo") will be used when this particular allocator is
requested as the allocator for a given region.
*** Integration with platform
There is one function that needs to be called from platform
initialisation code: the cma_early_regions_reserve()
function:
void cma_early_regions_reserve(int (*reserve)(struct cma_region *reg));
It traverses the list of all the early regions provided by the
platform and registered by drivers, and reserves memory for them.
The only argument is a callback function used to reserve each
region. Passing NULL as the argument is the same as passing the
cma_early_region_reserve() function, which uses bootmem and
memblock for allocating.
Alternatively, platform code could traverse the cma_early_regions
list by itself but this should never be necessary.
The platform also has a way of providing default attributes for
CMA; the cma_set_defaults() function is used for that purpose:
int cma_set_defaults(struct cma_region *regions, const char *map);
It needs to be called after early params have been parsed but
prior to reserving regions. It lets one specify the list of
regions defined by the platform and the map attribute. The map
may point to a string in __initdata. See above in this document
for example usage of this function.
** Future work
In the future, implementation of mechanisms that would allow the
free space inside the regions to be used as page cache, filesystem
buffers or swap devices is planned. With such mechanisms, the
memory would not be wasted when not used.
Because all allocations and freeing of chunks pass through the
CMA framework, it can track which parts of the reserved memory
are free and which are allocated. Tracking the unused memory
would let CMA use it for other purposes such as page cache, I/O
buffers, swap, etc.
Contents:
2.3 Userspace
2.4 Ondemand
2.5 Conservative
2.6 Interactive
3. The Governor Interface in the CPUfreq Core
governor but for the opposite direction. For example, when set to its
default value of '20', it means that the CPU usage needs to be below
20% between samples for the frequency to be decreased.
2.6 Interactive
---------------
The CPUfreq governor "interactive" is designed for latency-sensitive,
interactive workloads. This governor sets the CPU speed depending on
usage, similar to "ondemand" and "conservative" governors. However,
the governor is more aggressive about scaling the CPU speed up in
response to CPU-intensive activity.
Sampling the CPU load every X ms can lead to under-powering the CPU
for X ms, leading to dropped frames, stuttering UI, etc. Instead of
sampling the cpu at a specified rate, the interactive governor will
check whether to scale the cpu frequency up soon after coming out of
idle. When the cpu comes out of idle, a timer is configured to fire
within 1-2 ticks. If the cpu is very busy between exiting idle and
when the timer fires then we assume the cpu is underpowered and ramp
to MAX speed.
If the cpu was not sufficiently busy to immediately ramp to MAX speed,
then the governor evaluates the cpu load since the last speed
adjustment, choosing the higher of that longer-term load and the
short-term load since idle exit to determine the cpu speed to ramp to.
The tuneable values for this governor are:
min_sample_time: The minimum amount of time to spend at the current
frequency before ramping down. This is to ensure that the governor has
seen enough historic cpu load data to determine the appropriate
workload. Default is 80000 uS.
hispeed_freq: An intermediate "hi speed" at which to initially ramp
when CPU load hits the value specified in go_hispeed_load. If load
stays high for the amount of time specified in above_hispeed_delay,
then speed may be bumped higher. Default is maximum speed.
go_hispeed_load: The CPU load at which to ramp to the intermediate "hi
speed". Default is 85%.
above_hispeed_delay: Once speed is set to hispeed_freq, wait for this
long before bumping speed higher in response to continued high load.
Default is 20000 uS.
timer_rate: Sample rate for reevaluating cpu load when the system is
not idle. Default is 20000 uS.
input_boost: If non-zero, boost speed of all CPUs to hispeed_freq on
touchscreen activity. Default is 0.
boost: If non-zero, immediately boost speed of all CPUs to at least
hispeed_freq until zero is written to this attribute. If zero, allow
CPU speeds to drop below hispeed_freq according to load as usual.
boostpulse: Immediately boost speed of all CPUs to hispeed_freq for
min_sample_time, after which speeds are allowed to drop below
hispeed_freq according to load as usual.
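As a usage sketch, a boot script might select the governor and set these tunables via sysfs. The paths below assume the governor is built in and its tunables are global; the 800000 kHz hispeed value is a made-up figure for a hypothetical board:

```shell
# Select the interactive governor for cpu0 (repeat per CPU as needed).
echo interactive > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

GOV=/sys/devices/system/cpu/cpufreq/interactive

echo 85     > $GOV/go_hispeed_load      # ramp to hispeed_freq at 85% load
echo 800000 > $GOV/hispeed_freq         # intermediate "hi speed", in kHz
echo 20000  > $GOV/above_hispeed_delay  # wait 20 ms before going higher
echo 80000  > $GOV/min_sample_time      # hold a speed at least 80 ms
echo 20000  > $GOV/timer_rate           # sample load every 20 ms when busy
```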
3. The Governor Interface in the CPUfreq Core
=============================================
The buffer-user
in memory, mapped into its own address space, so it can access the same area
of memory.
*IMPORTANT*: [see https://lkml.org/lkml/2011/12/20/211 for more details]
For this first version, a buffer shared using the dma_buf sharing API:
- *may* be exported to user space using "mmap" *ONLY* by the exporter, outside of
this framework.
- with this new iteration of the dma-buf api, cpu access from the kernel has been
enabled; see below for the details.
dma-buf operations for device dma only
--------------------------------------
Access to a dma_buf from the kernel context involves three steps:
Direct Userspace Access/mmap Support
------------------------------------
Being able to mmap an exported dma-buf buffer object has 2 main use-cases:
- CPU fallback processing in a pipeline and
- supporting existing mmap interfaces in importers.
1. CPU fallback processing in a pipeline
In many processing pipelines it is sometimes required that the cpu can access
the data in a dma-buf (e.g. for thumbnail creation, snapshots, ...). To avoid
the need to handle this specially in userspace frameworks for buffer sharing
it's ideal if the dma_buf fd itself can be used to access the backing storage
from userspace using mmap.
Furthermore Android's ION framework already supports this (and is otherwise
rather similar to dma-buf from a userspace consumer side with using fds as
handles, too). So it's beneficial to support this in a similar fashion on
dma-buf to have a good transition path for existing Android userspace.
No special interfaces, userspace simply calls mmap on the dma-buf fd.
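A consumer-side sketch of that CPU-fallback path: once the exporter implements the mmap file operation, userspace just maps the fd it was handed. Since a real dma-buf fd needs an exporting driver, an ordinary file descriptor stands in for it below; the mmap() call itself is identical:

```c
#include <string.h>
#include <sys/mman.h>

/* Map `len` bytes of the buffer behind `fd` and touch them on the CPU,
 * as a thumbnailer or snapshot path would.  Returns 0 on success. */
static int map_and_touch(int fd, size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	if (p == MAP_FAILED)
		return -1;
	memset(p, 0xab, len);	/* CPU-side processing would happen here */
	return munmap(p, len);
}
```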
2. Supporting existing mmap interfaces in importers
Similar to the motivation for kernel cpu access, it is again important that
the userspace code of a given importing subsystem can use the same interfaces
with an imported dma-buf buffer object as with a native buffer object. This is
especially important for drm, where the userspace part of contemporary OpenGL,
X, and other drivers is huge, and reworking them to use a different way to
mmap a buffer would be rather invasive.
The assumption in the current dma-buf interfaces is that redirecting the
initial mmap is all that's needed. A survey of some of the existing
subsystems shows that no driver seems to do any nefarious thing like syncing
up with outstanding asynchronous processing on the device or allocating
special resources at fault time. So hopefully this is good enough, since
adding interfaces to intercept pagefaults and allow pte shootdowns would
increase the complexity quite a bit.
Interface:
int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
unsigned long);
If the importing subsystem simply provides a special-purpose mmap call to set
up a mapping in userspace, calling do_mmap with dma_buf->file will equally
achieve that for a dma-buf object.
3. Implementation notes for exporters
Because dma-buf buffers have invariant size over their lifetime, the dma-buf
core checks whether a vma is too large and rejects such mappings. The
exporter hence does not need to duplicate this check.
Because existing importing subsystems might presume coherent mappings for
userspace, the exporter needs to set up a coherent mapping. If that's not
possible, it needs to fake coherency by manually shooting down ptes when
leaving the cpu domain and flushing caches at fault time. Note that all the
dma_buf files share the same anon inode, hence the exporter needs to replace
the dma_buf file stored in vma->vm_file with its own if pte shootdown is
required. This is because the kernel uses the underlying inode's address_space
for vma tracking (and hence pte tracking at shootdown time with
unmap_mapping_range).
If the above shootdown dance turns out to be too expensive in certain
scenarios, we can extend dma-buf with a more explicit cache tracking scheme
for userspace mappings. But the current assumption is that using mmap is
always a slower path, so some inefficiencies should be acceptable.
Exporters that shoot down mappings (for any reasons) shall not do any
synchronization at fault time with outstanding device operations.
Synchronization is an orthogonal issue to sharing the backing storage of a
buffer and hence should not be handled by dma-buf itself. This is explicitly
mentioned here because many people seem to want something like this, but if
different exporters handle this differently, buffer sharing can fail in
interesting ways depending upon the exporter (if userspace starts depending
upon this implicit synchronization).
Miscellaneous notes
-------------------
the exporting driver to create a dmabuf fd must provide a way to let
userspace control setting of O_CLOEXEC flag passed in to dma_buf_fd().
- If an exporter needs to manually flush caches and hence needs to fake
coherency for mmap support, it needs to be able to zap all the ptes pointing
at the backing storage. Now linux mm needs a struct address_space associated
with the struct file stored in vma->vm_file to do that with the function
unmap_mapping_range. But the dma_buf framework only backs every dma_buf fd
with the anon_file struct file, i.e. all dma_bufs share the same file.
Hence exporters need to set up their own file (and address_space) association
by setting vma->vm_file and adjusting vma->vm_pgoff in the dma_buf mmap
callback. In the specific case of a gem driver the exporter could use the
shmem file already provided by gem (and set vm_pgoff = 0). Exporters can then
zap ptes by unmapping the corresponding range of the struct address_space
associated with their own file.
References:
[1] struct dma_buf_ops in include/linux/dma-buf.h
[2] All interfaces mentioned above defined in include/linux/dma-buf.h

Documentation/hid/uhid.txt
UHID - User-space I/O driver support for HID subsystem
========================================================
The HID subsystem needs two kinds of drivers. In this document we call them:
1. The "HID I/O Driver" is the driver that performs raw data I/O to the
low-level device. Internally, they register an hid_ll_driver structure with
the HID core. They perform device setup, read raw data from the device and
push it into the HID subsystem and they provide a callback so the HID
subsystem can send data to the device.
2. The "HID Device Driver" is the driver that parses HID reports and reacts on
them. There are generic drivers like "generic-usb" and "generic-bluetooth"
which adhere to the HID specification and provide the standardized features.
But there may be special drivers and quirks for each non-standard device out
there. Internally, they use the hid_driver structure.
Historically, the USB stack was the first subsystem to provide an HID I/O
Driver. However, other standards like Bluetooth have adopted the HID specs and
may provide HID I/O Drivers, too. The UHID driver allows implementing HID I/O
Drivers in user-space and feeding the data into the kernel HID-subsystem.
This allows user-space to operate on the same level as USB-HID, Bluetooth-HID
and similar. It does not provide a way to write HID Device Drivers, though. Use
hidraw for this purpose.
There is an example user-space application in ./samples/uhid/uhid-example.c
The UHID API
------------
UHID is accessed through a character misc-device. The minor-number is allocated
dynamically so you need to rely on udev (or similar) to create the device node.
This is /dev/uhid by default.
If a new device is detected by your HID I/O Driver and you want to register this
device with the HID subsystem, then you need to open /dev/uhid once for each
device you want to register. All further communication is done by read()'ing or
write()'ing "struct uhid_event" objects. Non-blocking operations are supported
by setting O_NONBLOCK.
struct uhid_event {
__u32 type;
union {
struct uhid_create_req create;
struct uhid_data_req data;
...
} u;
};
The "type" field contains the ID of the event. Depending on the ID different
payloads are sent. You must not split a single event across multiple read()'s or
multiple write()'s. A single event must always be sent as a whole. Furthermore,
only a single event can be sent per read() or write(). Pending data is ignored.
If you want to handle multiple events in a single syscall, then use vectored
I/O with readv()/writev().
The first thing you should do is send a UHID_CREATE event. This will
register the device. UHID will respond with an UHID_START event. You can now
start sending data to and reading data from UHID. However, unless UHID sends the
UHID_OPEN event, the internally attached HID Device Driver has no user attached.
That is, you may put your device to sleep until you receive the UHID_OPEN
event. Once you receive the UHID_OPEN event, you should start I/O. If the last
user closes the HID device, you will receive an UHID_CLOSE event. This may be
followed by an UHID_OPEN event again and so on. There is no need to perform
reference-counting in user-space. That is, you will never receive multiple
UHID_OPEN events without an UHID_CLOSE event. The HID subsystem performs
ref-counting for you.
You may decide to ignore UHID_OPEN/UHID_CLOSE, though. I/O is allowed even
though the device may have no users.
If you want to send data to the HID subsystem, you send a UHID_INPUT event with
your raw data payload. If the kernel wants to send data to the device, you will
read an UHID_OUTPUT or UHID_OUTPUT_EV event.
If your device disconnects, you should send an UHID_DESTROY event. This will
unregister the device. You can now send UHID_CREATE again to register a new
device.
If you close() the fd, the device is automatically unregistered and destroyed
internally.
write()
-------
write() allows you to modify the state of the device and feed input data into
the kernel. The following types are supported: UHID_CREATE, UHID_DESTROY and
UHID_INPUT. The kernel will parse the event immediately and if the event ID is
not supported, it will return -EOPNOTSUPP. If the payload is invalid, then
-EINVAL is returned; otherwise, the number of bytes consumed is returned and
the request was handled successfully.
UHID_CREATE:
This creates the internal HID device. No I/O is possible until you send this
event to the kernel. The payload is of type struct uhid_create_req and
contains information about your device. You can start I/O now.
UHID_DESTROY:
This destroys the internal HID device. No further I/O will be accepted. There
may still be pending messages that you can receive with read() but no further
UHID_INPUT events can be sent to the kernel.
You can create a new device by sending UHID_CREATE again. There is no need to
reopen the character device.
UHID_INPUT:
You must send UHID_CREATE before sending input to the kernel! This event
contains a data-payload. This is the raw data that you read from your device.
The kernel will parse the HID reports and react to them.
UHID_FEATURE_ANSWER:
If you receive a UHID_FEATURE request you must answer with this request. You
must copy the "id" field from the request into the answer. Set the "err" field
to 0 if no error occurred or to EIO if an I/O error occurred.
If "err" is 0 then you should fill the buffer of the answer with the results
of the feature request and set "size" correspondingly.
read()
------
read() will return a queued output report. These output reports can be of type
UHID_START, UHID_STOP, UHID_OPEN, UHID_CLOSE, UHID_OUTPUT or UHID_OUTPUT_EV. No
reaction is required to any of them but you should handle them according to your
needs. Only UHID_OUTPUT and UHID_OUTPUT_EV have payloads.
UHID_START:
This is sent when the HID device is started. Consider this as an answer to
UHID_CREATE. This is always the first event that is sent.
UHID_STOP:
This is sent when the HID device is stopped. Consider this as an answer to
UHID_DESTROY.
If the kernel HID device driver closes the device manually (that is, you
didn't send UHID_DESTROY) then you should consider this device closed and send
an UHID_DESTROY event. You may want to reregister your device, though. This is
always the last message that is sent to you unless you reopen the device with
UHID_CREATE.
UHID_OPEN:
This is sent when the HID device is opened. That is, the data that the HID
device provides is read by some other process. You may ignore this event but
it is useful for power-management. As long as you haven't received this event
there is actually no other process that reads your data so there is no need to
send UHID_INPUT events to the kernel.
UHID_CLOSE:
This is sent when there are no more processes which read the HID data. It is
the counterpart of UHID_OPEN and you may as well ignore this event.
UHID_OUTPUT:
This is sent if the HID device driver wants to send raw data to the I/O
device. You should read the payload and forward it to the device. The payload
is of type "struct uhid_data_req".
This may be received even though you haven't received UHID_OPEN, yet.
UHID_OUTPUT_EV:
Same as UHID_OUTPUT but this contains a "struct input_event" as payload. This
is called for force-feedback, LED or similar events which are received through
an input device by the HID subsystem. You should convert this into raw reports
and send them to your device similar to events of type UHID_OUTPUT.
UHID_FEATURE:
This event is sent if the kernel driver wants to perform a feature request as
described in the HID specs. The report-type and report-number are available in
the payload.
The kernel serializes feature requests so there will never be two in parallel.
However, if you fail to respond with a UHID_FEATURE_ANSWER in a time-span of 5
seconds, then the requests will be dropped and a new one might be sent.
Therefore, the payload also contains an "id" field that identifies every
request.
Document by:
David Herrmann <dh.herrmann@googlemail.com>


@@ -1 +0,0 @@
aliasing-test


@@ -2372,6 +2372,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
resume= [SWSUSP]
Specify the partition device for software suspend
Format:
{/dev/<dev> | PARTUUID=<uuid> | <int>:<int> | <hex>}
resume_offset= [SWSUSP]
Specify the offset from the beginning of the partition


@@ -1 +0,0 @@
ifenslave


@@ -1 +0,0 @@
timestamping


@@ -1 +0,0 @@
crc32hash


@@ -29,7 +29,7 @@ More details follow:
Write 'mem' to
/sys/power/state
syfs file
sysfs file
|
v
Acquire pm_mutex lock


@@ -1,2 +0,0 @@
spidev_fdx
spidev_test

Documentation/sync.txt Normal file

@@ -0,0 +1,75 @@
Motivation:
In complicated DMA pipelines such as graphics (multimedia, camera, gpu, display)
a consumer of a buffer needs to know when the producer has finished producing
it. Likewise the producer needs to know when the consumer is finished with the
buffer so it can reuse it. A particular buffer may be consumed by multiple
consumers which will retain the buffer for different amounts of time. In
addition, a consumer may consume multiple buffers atomically.
The sync framework adds an API which allows synchronization between the
producers and consumers in a generic way while also allowing platforms which
have shared hardware synchronization primitives to exploit them.
Goals:
* provide a generic API for expressing synchronization dependencies
* allow drivers to exploit hardware synchronization between hardware
blocks
* provide a userspace API that allows a compositor to manage
dependencies.
* provide rich telemetry data to allow debugging slowdowns and stalls of
the graphics pipeline.
Objects:
* sync_timeline
* sync_pt
* sync_fence
sync_timeline:
A sync_timeline is an abstract monotonically increasing counter. In general,
each driver/hardware block context will have one of these. They can be backed
by the appropriate hardware or rely on the generic sw_sync implementation.
Timelines are only ever created through their specific implementations
(i.e. sw_sync.)
sync_pt:
A sync_pt is an abstract value which marks a point on a sync_timeline. Sync_pts
have a single timeline parent. They have 3 states: active, signaled, and error.
They start in active state and transition, once, to either signaled (when the
timeline counter advances beyond the sync_pts value) or error state.
sync_fence:
Sync_fences are the primary primitives used by drivers to coordinate
synchronization of their buffers. They are a collection of sync_pts which may
or may not have the same timeline parent. A sync_pt can only exist in one fence
and the fence's list of sync_pts is immutable once created. Fences can be
waited on synchronously or asynchronously. Two fences can also be merged to
create a third fence containing a copy of the two fences sync_pts. Fences are
backed by file descriptors to allow userspace to coordinate the display pipeline
dependencies.
Use:
A driver implementing sync support should have a work submission function which:
* takes a fence argument specifying when to begin work
* asynchronously queues that work to kick off when the fence is signaled
* returns a fence to indicate when its work will be done.
* signals the returned fence once the work is completed.
Consider an imaginary display driver that has the following API:
/*
* assumes buf is ready to be displayed.
* blocks until the buffer is on screen.
*/
void display_buffer(struct dma_buf *buf);
The new API will become:
/*
* will display buf when fence is signaled.
* returns immediately with a fence that will signal when buf
* is no longer displayed.
*/
struct sync_fence* display_buffer(struct dma_buf *buf,
struct sync_fence *fence);


@@ -0,0 +1,60 @@
CPU cooling APIs How To
===================================
Written by Amit Daniel Kachhap <amit.kachhap@linaro.org>
Updated: 12 May 2012
Copyright (c) 2012 Samsung Electronics Co., Ltd(http://www.samsung.com)
0. Introduction
The generic CPU cooling layer (frequency clipping, CPU hotplug, etc.) provides
registration/unregistration APIs to the caller. Binding the cooling devices to
trip points is left to the caller. The registration APIs return the cooling
device pointer.
1. cpu cooling APIs
1.1 cpufreq registration/unregistration APIs
1.1.1 struct thermal_cooling_device *cpufreq_cooling_register(
struct freq_clip_table *tab_ptr, unsigned int tab_size)
This interface function registers the cpufreq cooling device with the name
"thermal-cpufreq-%x". This API can support multiple instances of cpufreq
cooling devices.
tab_ptr: The table containing the maximum value of frequency to be clipped
for each cooling state.
.freq_clip_max: Value of frequency to be clipped for each allowed
cpus.
.temp_level: Temperature level at which the frequency clamping will
happen.
.mask_val: cpumask of the allowed cpu's
tab_size: the total number of cpufreq cooling states.
1.1.2 void cpufreq_cooling_unregister(struct thermal_cooling_device *cdev)
This interface function unregisters the "thermal-cpufreq-%x" cooling device.
cdev: Cooling device pointer which has to be unregistered.
1.2 CPU cooling action notifier register/unregister interface
1.2.1 int cputherm_register_notifier(struct notifier_block *nb,
unsigned int list)
This interface registers a driver with cpu cooling layer. The driver will
be notified when any cpu cooling action is called.
nb: notifier function to register
list: CPUFREQ_COOLING_START or CPUFREQ_COOLING_STOP
1.2.2 int cputherm_unregister_notifier(struct notifier_block *nb,
unsigned int list)
This interface unregisters a driver from the cpu cooling layer. The driver
will no longer be notified when any cpu cooling action is called.
nb: notifier function to unregister
list: CPUFREQ_COOLING_START or CPUFREQ_COOLING_STOP


@@ -46,36 +46,7 @@ The threshold levels are defined as follows:
The threshold and each trigger_level are set
through the corresponding registers.
When an interrupt occurs, this driver notifies user space of
one of four threshold levels for the interrupt
through kobject_uevent_env and sysfs_notify functions.
When an interrupt occurs, this driver notifies the kernel thermal framework
with the function exynos4_report_trigger.
Although an interrupt condition for level_0 can be set,
it is not notified to user space through sysfs_notify function.
Sysfs Interface
---------------
name name of the temperature sensor
RO
temp1_input temperature
RO
temp1_max temperature for level_1 interrupt
RO
temp1_crit temperature for level_2 interrupt
RO
temp1_emergency temperature for level_3 interrupt
RO
temp1_max_alarm alarm for level_1 interrupt
RO
temp1_crit_alarm
alarm for level_2 interrupt
RO
temp1_emergency_alarm
alarm for level_3 interrupt
RO
it can be used to synchronize the cooling action.


@@ -32,7 +32,8 @@ temperature) and throttle appropriate devices.
1.1 thermal zone device interface
1.1.1 struct thermal_zone_device *thermal_zone_device_register(char *name,
int trips, void *devdata, struct thermal_zone_device_ops *ops)
int trips, int mask, void *devdata,
struct thermal_zone_device_ops *ops)
This interface function adds a new thermal zone device (sensor) to
/sys/class/thermal folder as thermal_zone[0-*]. It tries to bind all the
@@ -40,6 +41,7 @@ temperature) and throttle appropriate devices.
name: the thermal zone name.
trips: the total number of trip points this thermal zone supports.
mask: Bit string: If 'n'th bit is set, then trip point 'n' is writeable.
devdata: device private data
ops: thermal zone device call-backs.
.bind: bind the thermal zone device with a thermal cooling device.


@@ -1 +0,0 @@
v4lgrab


@@ -1,2 +0,0 @@
page-types
slabinfo


@@ -1,2 +0,0 @@
watchdog-simple
watchdog-test


@@ -6855,6 +6855,13 @@ S: Maintained
F: Documentation/filesystems/ufs.txt
F: fs/ufs/
UHID USERSPACE HID IO DRIVER:
M: David Herrmann <dh.herrmann@googlemail.com>
L: linux-input@vger.kernel.org
S: Maintained
F: drivers/hid/uhid.c
F: include/linux/uhid.h
ULTRA-WIDEBAND (UWB) SUBSYSTEM:
L: linux-usb@vger.kernel.org
S: Orphan

arch/.gitignore vendored

@@ -1,2 +0,0 @@
i386
x86_64


@@ -1 +0,0 @@
vmlinux.lds


@@ -880,8 +880,6 @@ config ARCH_S5PV210
config ARCH_EXYNOS
bool "SAMSUNG EXYNOS"
select CPU_V7
select ARCH_SPARSEMEM_ENABLE
select ARCH_HAS_HOLES_MEMORYMODEL
select GENERIC_GPIO
select HAVE_CLK
select CLKDEV_LOOKUP
@@ -1155,7 +1153,7 @@ source arch/arm/mm/Kconfig
config ARM_NR_BANKS
int
default 16 if ARCH_EP93XX
default 16 if ARCH_EP93XX || ARCH_EXYNOS
default 8
config IWMMXT
@@ -1405,6 +1403,43 @@ config PL310_ERRATA_769419
on systems with an outer cache, the store buffer is drained
explicitly.
config ARM_ERRATA_761320
bool "ARM errata: no direct eviction"
depends on CPU_V7 && SMP
help
This option enables the workaround for the 761320 Cortex-A9 erratum.
config ARM_ERRATA_766421
bool "ARM errata: Strongly-Ordered/Device load or NC LDREX could return incorrect data"
depends on CPU_V7
help
This option enables the workaround for the 766421 Cortex-A15 erratum.
In certain situations, a strongly ordered or device load instruction,
or a non-cacheable normal memory load-exclusive instruction could
match multiple fill buffers and return incorrect data.
The workaround is to add a DMB instruction when making any change to the
translation regime, before doing any new loads/stores/preloads in the new
translation regime.
config ARM_ERRATA_773022
bool "ARM errata: incorrect instructions may be executed from loop buffer"
depends on CPU_V7
help
This option enables the workaround for the 773022 Cortex-A15 erratum.
In certain rare sequences of code, the loop buffer may deliver
incorrect instructions.
The workaround is to disable the loop buffer.
config ARM_ERRATA_774769
bool "ARM errata: data corruption may occur with store streaming in a system"
depends on CPU_V7
help
This option enables the workaround for erratum 774769, under which
external memory may be corrupted by store streaming.
The workaround is to configure write streaming on versions of A15
affected by this erratum such that no streaming-write ever allocates
into the L2 cache.
endmenu
source "arch/arm/common/Kconfig"
@@ -1559,6 +1594,24 @@ config HAVE_ARM_TWD
help
This options enables support for the ARM timer and watchdog unit
config BL_SWITCHER
bool "big.LITTLE switcher support (experimental)"
depends on CPU_V7 && EXPERIMENTAL
select CPU_PM
select ARM_CPU_SUSPEND
help
The big.LITTLE "switcher" provides the core functionality to
transparently handle transition between a cluster of A15's
and a cluster of A7's in a big.LITTLE system.
config BL_SWITCHER_DUMMY_IF
bool "Simple big.LITTLE switcher user interface"
depends on BL_SWITCHER
help
This is a simple, dummy character-device interface for controlling
the big.LITTLE switcher core code. It is meant for
debugging purposes only.
choice
prompt "Memory split"
default VMSPLIT_3G
@@ -1622,7 +1675,7 @@ source kernel/Kconfig.preempt
config HZ
int
default 200 if ARCH_EBSA110 || ARCH_S3C24XX || ARCH_S5P64X0 || \
ARCH_S5PV210 || ARCH_EXYNOS4
ARCH_S5PV210 || ARCH_EXYNOS
default OMAP_32K_TIMER_HZ if ARCH_OMAP && OMAP_32K_TIMER
default AT91_TIMER_HZ if ARCH_AT91
default SHMOBILE_TIMER_HZ if ARCH_SHMOBILE
@@ -1722,6 +1775,16 @@ config ARCH_SELECT_MEMORY_MODEL
config HAVE_ARCH_PFN_VALID
def_bool ARCH_HAS_HOLES_MEMORYMODEL || !SPARSEMEM
config ARCH_SKIP_SECONDARY_CALIBRATE
bool "Skip secondary CPU calibration"
depends on SMP
help
On some architectures, secondary cores share a clock with the primary
core and hence scale together, so secondary core lpj calibration is
unnecessary and can be skipped to save considerable boot time.
If unsure, say n.
config HIGHMEM
bool "High Memory Support"
depends on MMU
@@ -1885,6 +1948,15 @@ config DEPRECATED_PARAM_STRUCT
This was deprecated in 2001 and announced to live on for 5 years.
Some old boot loaders still use this way.
config ARM_FLUSH_CONSOLE_ON_RESTART
bool "Force flush the console on restart"
help
If the console is locked while the system is rebooted, the messages
in the temporary logbuffer would not have propagated to all the
console drivers. This option forces the console lock to be
released if it failed to be acquired, which will cause all the
pending messages to be flushed.
endmenu
menu "Boot options"


@@ -63,6 +63,27 @@ config DEBUG_USER
8 - SIGSEGV faults
16 - SIGBUS faults
config DEBUG_RODATA
bool "Write protect kernel text section"
default n
depends on DEBUG_KERNEL && MMU
---help---
Mark the kernel text section as write-protected in the pagetables,
in order to catch accidental (and incorrect) writes to such const
data. This will cause the size of the kernel, plus up to 4MB, to
be mapped as pages instead of sections, which will increase TLB
pressure.
If in doubt, say "N".
config DEBUG_RODATA_TEST
bool "Testcase for the DEBUG_RODATA feature"
depends on DEBUG_RODATA
default n
---help---
This option enables a testcase for the DEBUG_RODATA
feature.
If in doubt, say "N"
# These options are only for real kernel hackers who want to get their hands dirty.
config DEBUG_LL
bool "Kernel low-level debugging functions (read help!)"


@@ -1,6 +0,0 @@
Image
zImage
xipImage
bootpImage
uImage
*.dtb


@@ -1,18 +0,0 @@
ashldi3.S
font.c
lib1funcs.S
piggy.gzip
piggy.lzo
piggy.lzma
piggy.xzkern
vmlinux
vmlinux.lds
# borrowed libfdt files
fdt.c
fdt.h
fdt_ro.c
fdt_rw.c
fdt_wip.c
libfdt.h
libfdt_internal.h


@@ -766,6 +766,8 @@ proc_types:
@ b __arm6_mmu_cache_off
@ b __armv3_mmu_cache_flush
#if !defined(CONFIG_CPU_V7)
/* This collides with some V7 IDs, preventing correct detection */
.word 0x00000000 @ old ARM ID
.word 0x0000f000
mov pc, lr
@@ -774,6 +776,7 @@ proc_types:
THUMB( nop )
mov pc, lr
THUMB( nop )
#endif
.word 0x41007000 @ ARM7/710
.word 0xfff8fe00


@@ -0,0 +1,58 @@
/*
* SAMSUNG SMDK5410 board device tree source
*
* Copyright (c) 2012 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
/dts-v1/;
/include/ "exynos5410.dtsi"
/ {
model = "SAMSUNG SMDK5410 board based on EXYNOS5410";
compatible = "samsung,smdk5410", "samsung,exynos5410";
memory {
reg = <0x40000000 0x80000000>;
};
chosen {
bootargs = "root=/dev/ram0 rw ramdisk=8192 initrd=0x41000000,8M console=ttySAC2,115200 init=/linuxrc mem=512M";
};
i2c@12C60000 {
status = "disabled";
};
i2c@12C70000 {
status = "disabled";
};
i2c@12C80000 {
status = "disabled";
};
i2c@12C90000 {
status = "disabled";
};
i2c@12CA0000 {
status = "disabled";
};
i2c@12CB0000 {
status = "disabled";
};
i2c@12CC0000 {
status = "disabled";
};
i2c@12CD0000 {
status = "disabled";
};
};


@@ -0,0 +1,172 @@
/*
* SAMSUNG EXYNOS5410 SoC device tree source
*
* Copyright (c) 2012 Samsung Electronics Co., Ltd.
* http://www.samsung.com
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
/include/ "skeleton.dtsi"
/ {
compatible = "samsung,exynos5410";
interrupt-parent = <&gic>;
gic:interrupt-controller@10481000 {
compatible = "arm,cortex-a9-gic";
#interrupt-cells = <3>;
interrupt-controller;
reg = <0x10481000 0x1000>, <0x10482000 0x2000>;
};
combiner:interrupt-controller@10440000 {
compatible = "samsung,exynos4210-combiner";
#interrupt-cells = <2>;
interrupt-controller;
samsung,combiner-nr = <32>;
reg = <0x10440000 0x1000>;
interrupts = <0 0 0>, <0 1 0>, <0 2 0>, <0 3 0>,
<0 4 0>, <0 5 0>, <0 6 0>, <0 7 0>,
<0 8 0>, <0 9 0>, <0 10 0>, <0 11 0>,
<0 12 0>, <0 13 0>, <0 14 0>, <0 15 0>,
<0 16 0>, <0 17 0>, <0 18 0>, <0 19 0>,
<0 20 0>, <0 21 0>, <0 22 0>, <0 23 0>,
<0 24 0>, <0 25 0>, <0 26 0>, <0 27 0>,
<0 28 0>, <0 29 0>, <0 30 0>, <0 31 0>;
};
watchdog {
compatible = "samsung,s3c2410-wdt";
reg = <0x101D0000 0x100>;
interrupts = <0 42 0>;
};
rtc {
compatible = "samsung,s3c6410-rtc";
reg = <0x101E0000 0x100>;
interrupts = <0 43 0>, <0 44 0>;
};
serial@12C00000 {
compatible = "samsung,exynos4210-uart";
reg = <0x12C00000 0x100>;
interrupts = <0 51 0>;
};
serial@12C10000 {
compatible = "samsung,exynos4210-uart";
reg = <0x12C10000 0x100>;
interrupts = <0 52 0>;
};
serial@12C20000 {
compatible = "samsung,exynos4210-uart";
reg = <0x12C20000 0x100>;
interrupts = <0 53 0>;
};
serial@12C30000 {
compatible = "samsung,exynos4210-uart";
reg = <0x12C30000 0x100>;
interrupts = <0 54 0>;
};
i2c@12C60000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12C60000 0x100>;
interrupts = <0 56 0>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c@12C70000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12C70000 0x100>;
interrupts = <0 57 0>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c@12C80000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12C80000 0x100>;
interrupts = <0 58 0>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c@12C90000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12C90000 0x100>;
interrupts = <0 59 0>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c@12CA0000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12CA0000 0x100>;
interrupts = <0 60 0>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c@12CB0000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12CB0000 0x100>;
interrupts = <0 61 0>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c@12CC0000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12CC0000 0x100>;
interrupts = <0 62 0>;
#address-cells = <1>;
#size-cells = <0>;
};
i2c@12CD0000 {
compatible = "samsung,s3c2440-i2c";
reg = <0x12CD0000 0x100>;
interrupts = <0 63 0>;
#address-cells = <1>;
#size-cells = <0>;
};
amba {
#address-cells = <1>;
#size-cells = <1>;
compatible = "arm,amba-bus";
interrupt-parent = <&gic>;
ranges;
pdma0: pdma@121A0000 {
compatible = "arm,pl330", "arm,primecell";
reg = <0x121A0000 0x1000>;
interrupts = <0 34 0>;
};
pdma1: pdma@121B0000 {
compatible = "arm,pl330", "arm,primecell";
reg = <0x121B0000 0x1000>;
interrupts = <0 35 0>;
};
mdma0: mdma@10800000 {
compatible = "arm,pl330", "arm,primecell";
reg = <0x10800000 0x1000>;
interrupts = <0 33 0>;
};
mdma1: mdma@11C10000 {
compatible = "arm,pl330", "arm,primecell";
reg = <0x11C10000 0x1000>;
interrupts = <0 124 0>;
};
};
};

View File

@@ -40,3 +40,53 @@ config SHARP_PARAM
config SHARP_SCOOP
bool
config FIQ_GLUE
bool
select FIQ
config FIQ_DEBUGGER
bool "FIQ Mode Serial Debugger"
select FIQ
select FIQ_GLUE
default n
help
The FIQ serial debugger can accept commands even when the
kernel is unresponsive due to being stuck with interrupts
disabled.
config FIQ_DEBUGGER_NO_SLEEP
bool "Keep serial debugger active"
depends on FIQ_DEBUGGER
default n
help
Enables the serial debugger at boot. Passing
fiq_debugger.no_sleep on the kernel commandline will
override this config option.
config FIQ_DEBUGGER_WAKEUP_IRQ_ALWAYS_ON
bool "Don't disable wakeup IRQ when debugger is active"
depends on FIQ_DEBUGGER
default n
help
Don't disable the wakeup irq when enabling the uart clock. This will
cause extra interrupts, but it makes the serial debugger usable on
some MSM radio builds that ignore the uart clock request in power
collapse.
config FIQ_DEBUGGER_CONSOLE
bool "Console on FIQ Serial Debugger port"
depends on FIQ_DEBUGGER
default n
help
Enables a console so that printk messages are displayed on
the debugger serial port as they occur.
config FIQ_DEBUGGER_CONSOLE_DEFAULT_ENABLE
bool "Put the FIQ debugger into console mode by default"
depends on FIQ_DEBUGGER_CONSOLE
default n
help
If enabled, this puts the fiq debugger into console mode by default.
Otherwise, the fiq debugger will start out in debug mode.


@@ -15,3 +15,8 @@ obj-$(CONFIG_ARCH_IXP2000) += uengine.o
obj-$(CONFIG_ARCH_IXP23XX) += uengine.o
obj-$(CONFIG_PCI_HOST_ITE8152) += it8152.o
obj-$(CONFIG_ARM_TIMER_SP804) += timer-sp.o
obj-$(CONFIG_BL_SWITCHER) += bL_head.o bL_entry.o
obj-$(CONFIG_BL_SWITCHER) += bL_switcher.o
obj-$(CONFIG_BL_SWITCHER) += bL_vlock.o
obj-$(CONFIG_FIQ_GLUE) += fiq_glue.o fiq_glue_setup.o
obj-$(CONFIG_FIQ_DEBUGGER) += fiq_debugger.o

arch/arm/common/bL_entry.c Normal file

@@ -0,0 +1,284 @@
/*
* arch/arm/common/bL_entry.c -- big.LITTLE kernel re-entry point
*
* Created by: Nicolas Pitre, March 2012
* Copyright: (C) 2012 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <asm/bL_entry.h>
#include <asm/barrier.h>
#include <asm/proc-fns.h>
#include <asm/cacheflush.h>
#include <asm/memblock.h>
extern volatile unsigned long bL_entry_vectors[BL_NR_CLUSTERS][BL_CPUS_PER_CLUSTER];
void bL_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr)
{
unsigned long val = ptr ? virt_to_phys(ptr) : 0;
bL_entry_vectors[cluster][cpu] = val;
smp_wmb();
__cpuc_flush_dcache_area((void *)&bL_entry_vectors[cluster][cpu], 4);
outer_clean_range(__pa(&bL_entry_vectors[cluster][cpu]),
__pa(&bL_entry_vectors[cluster][cpu + 1]));
}
unsigned long bL_sync_phys;
struct bL_sync_struct *bL_sync;
/*
* __bL_cpu_going_down: Indicates that the cpu is being torn down
* This must be called at the point of committing to teardown of a CPU.
*/
void __bL_cpu_going_down(unsigned int cpu, unsigned int cluster)
{
writeb_relaxed(CPU_GOING_DOWN, &bL_sync->clusters[cluster].cpus[cpu]);
dsb();
}
/*
* __bL_cpu_down: Indicates that cpu teardown is complete and that the
* cluster can be torn down without disrupting this CPU.
* To avoid deadlocks, this must be called before a CPU is powered down.
*/
void __bL_cpu_down(unsigned int cpu, unsigned int cluster)
{
dsb();
writeb_relaxed(CPU_DOWN, &bL_sync->clusters[cluster].cpus[cpu]);
dsb_sev();
}
/*
* __bL_outbound_leave_critical: Leave the cluster teardown critical section.
* @state: the final state of the cluster:
* CLUSTER_UP: no destructive teardown was done and the cluster has been
* restored to the previous state; or
* CLUSTER_DOWN: the cluster has been torn-down, ready for power-off.
*/
void __bL_outbound_leave_critical(unsigned int cluster, int state)
{
dsb();
writeb_relaxed(state, &bL_sync->clusters[cluster].cluster);
dsb_sev();
}
/*
* __bL_outbound_enter_critical: Enter the cluster teardown critical section.
* This function should be called by the last man, after local CPU teardown
* is complete.
*/
bool __bL_outbound_enter_critical(unsigned int cpu, unsigned int cluster)
{
unsigned int i;
struct bL_cluster_sync_struct *c = &bL_sync->clusters[cluster];
/* Warn inbound CPUs that the cluster is being torn down: */
writeb_relaxed(CLUSTER_GOING_DOWN, &c->cluster);
dsb();
/* Back out if the inbound cluster is already in the critical region: */
if (readb_relaxed(&c->inbound) == INBOUND_COMING_UP)
goto abort;
/*
* Wait for all CPUs to get out of the GOING_DOWN state, so that local
* teardown is complete on each CPU before tearing down the cluster.
*
* If any CPU has been woken up again from the DOWN state, then we
* shouldn't be taking the cluster down at all: abort in that case.
*/
for (i = 0; i < BL_CPUS_PER_CLUSTER; i++) {
int cpustate;
if (i == cpu)
continue;
while (1) {
cpustate = readb_relaxed(&c->cpus[i]);
if (cpustate != CPU_GOING_DOWN)
break;
wfe();
}
switch (cpustate) {
case CPU_DOWN:
continue;
default:
goto abort;
}
}
dsb();
return true;
abort:
__bL_outbound_leave_critical(cluster, CLUSTER_UP);
return false;
}
bool __bL_cluster_state(unsigned int cluster)
{
return readb_relaxed(&bL_sync->clusters[cluster].cluster);
}
/*
* bL_running_cluster_num_cpus: Return the cluster number of running cpu
*/
unsigned int bL_running_cluster_num_cpus(unsigned int cpu)
{
unsigned int cluster = 0;
unsigned int cpustate;
cpustate = readb_relaxed(&bL_sync->clusters[cluster].cpus[cpu]);
if (cpustate == CPU_DOWN)
cluster = 1;
pr_debug("cpu %d running cluster : %d\n", cpu, cluster);
return cluster;
}
void bL_update_cluster_state(unsigned int value, unsigned int cluster)
{
if (value < CLUSTER_DOWN || value > CLUSTER_GOING_DOWN)
return;
writeb_relaxed(value, &bL_sync->clusters[cluster].cluster);
}
void bL_update_cpu_state(unsigned int value, unsigned int cpu,
unsigned int cluster)
{
if (value < CPU_DOWN || value > CPU_GOING_DOWN)
return;
writeb_relaxed(value, &bL_sync->clusters[cluster].cpus[cpu]);
}
extern unsigned long bL_power_up_setup_phys;
int __init bL_cluster_sync_reserve(void)
{
struct page *page;
void *virt;
page = alloc_page(GFP_KERNEL);
bL_sync_phys = page_to_phys(page);
virt = vmap(&page, 1, VM_MAP,
pgprot_writecombine(PAGE_KERNEL));
bL_sync = virt;
return 0;
}
static struct resource bL_iomem_resource = {
.name = "big.LITTLE cluster synchronisation buffer",
.flags = IORESOURCE_MEM|IORESOURCE_EXCLUSIVE|IORESOURCE_BUSY,
};
unsigned long bL_vlock_phys;
struct bL_firstman_vlock_struct *bL_vlock;
int __init bL_vlock_reserve(void)
{
struct page *page;
void *virt;
page = alloc_page(GFP_KERNEL);
bL_vlock_phys = page_to_phys(page);
virt = vmap(&page, 1, VM_MAP, pgprot_writecombine(PAGE_KERNEL));
bL_vlock = virt;
return 0;
}
static struct resource bL_vlock_resource = {
.name = "big.LITTLE voting lock buffer",
.flags = IORESOURCE_MEM|IORESOURCE_EXCLUSIVE|IORESOURCE_BUSY,
};
int __init bL_cluster_sync_init(const struct bL_power_ops *ops)
{
unsigned int i, mpidr, this_cluster;
/*
* It is too late to steal physical memory here.
* Boards must pre-reserve synchronisation memory by calling
* bL_cluster_sync_reserve() from their machine_desc .reserve hook.
*/
bL_cluster_sync_reserve();
BUG_ON(bL_sync_phys == 0);
if (!bL_sync) {
pr_err("big.LITTLE synchronisation buffer mapping failed\n");
return -ENOMEM;
}
bL_iomem_resource.start = bL_sync_phys;
bL_iomem_resource.end = bL_sync_phys + BL_SYNC_MEM_RESERVE - 1;
insert_resource(&iomem_resource, &bL_iomem_resource);
bL_vlock_reserve();
BUG_ON(bL_vlock_phys == 0);
if (!bL_vlock) {
pr_err("big.LITTLE voting lock buffer mapping failed\n");
return -ENOMEM;
}
bL_vlock_resource.start = bL_vlock_phys;
bL_vlock_resource.end = bL_vlock_phys + BL_VLOCK_MEM_RESERVE - 1;
insert_resource(&iomem_resource, &bL_vlock_resource);
/*
* Set initial CPU and cluster states.
* Only one cluster is assumed to be active at this point.
*/
asm ("mrc\tp15, 0, %0, c0, c0, 5" : "=r" (mpidr));
this_cluster = (mpidr >> 8) & 0xf;
memset(bL_sync, 0, sizeof *bL_sync);
for_each_online_cpu(i)
bL_sync->clusters[this_cluster].cpus[i] = CPU_UP;
bL_sync->clusters[this_cluster].cluster = CLUSTER_UP;
if (ops->power_up_setup) {
bL_power_up_setup_phys =
virt_to_phys(ops->power_up_setup);
__cpuc_flush_dcache_area((void *)&bL_power_up_setup_phys,
sizeof bL_power_up_setup_phys);
outer_clean_range(__pa(&bL_power_up_setup_phys),
__pa(&bL_power_up_setup_phys + 1));
}
__cpuc_flush_dcache_area((void *)&bL_sync_phys,
sizeof bL_sync_phys);
outer_clean_range(__pa(&bL_sync_phys), __pa(&bL_sync_phys + 1));
/*
 * Initialise the voting lock structure used to elect the first man.
 * The voting owner and the per-CPU vote slots are cleared.
 */
memset(bL_vlock, 0, sizeof *bL_vlock);
for (i = 0; i < BL_NR_CLUSTERS; i++) {
int j;
bL_vlock->clusters[i].voting_owner = 0;
for_each_online_cpu(j)
bL_vlock->clusters[i].voting_offset[j] = 0;
}
__cpuc_flush_dcache_area((void *)&bL_vlock_phys, sizeof bL_vlock_phys);
outer_clean_range(__pa(&bL_vlock_phys), __pa(&bL_vlock_phys + 1));
return 0;
}

arch/arm/common/bL_head.S Normal file

@@ -0,0 +1,195 @@
/*
* arch/arm/common/bL_head.S -- big.LITTLE kernel re-entry point
*
* Created by: Nicolas Pitre, March 2012
* Copyright: (C) 2012 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/linkage.h>
#include <asm/bL_entry.h>
#include <asm/bL_vlock.h>
#include <asm/asm-offsets.h>
.if BL_SYNC_CLUSTER_CPUS
.error "cpus must be the first member of struct bL_cluster_sync_struct"
.endif
.macro pr_dbg cpu, string
#if defined(CONFIG_DEBUG_LL) && defined(DEBUG)
b 1901f
1902: .ascii "CPU 0: \0CPU 1: \0CPU 2: \0CPU 3: \0"
.ascii "CPU 4: \0CPU 5: \0CPU 6: \0CPU 7: \0"
1903: .asciz "\string"
.align
1901: adr r0, 1902b
add r0, r0, \cpu, lsl #3
bl printascii
adr r0, 1903b
bl printascii
#endif
.endm
.arm
ENTRY(bl_entry_point)
mrc p15, 0, r0, c0, c0, 5
ubfx r9, r0, #0, #4 @ r9 = cpu
ubfx r10, r0, #8, #4 @ r10 = cluster
mov r3, #BL_CPUS_PER_CLUSTER
mla r4, r3, r10, r9 @ r4 = canonical CPU index
cmp r4, #(BL_CPUS_PER_CLUSTER * BL_NR_CLUSTERS)
blo 2f
/* We didn't expect this CPU. Try to make it quiet. */
1: wfi
wfe
b 1b
2: pr_dbg r4, "kernel bl_entry_point\n"
/*
* MMU is off so we need to get to bL_entry_vectors in a
* position independent way.
*/
adr r5, 3f
ldr r7, 4f
ldr r8, 5f
ldr r11, 6f
ldr r6, [r5]
ldr r7, [r5, r7] @ r7 = bL_power_up_setup_phys
ldr r8, [r5, r8] @ r8 = bL_sync_phys
ldr r11, [r5, r11] @ r11 = first_man_locks
add r5, r5, r6 @ r5 = bL_entry_vectors
bL_entry_gated:
ldr r6, [r5, r4, lsl #2]
cmp r6, #0
/* wfeeq */
beq bL_entry_gated
pr_dbg r4, "released\n"
@ no longer used
@ r6 = CPU entry vector
mov r0, #BL_SYNC_CLUSTER_SIZE
mla r8, r0, r10, r8 @ r8 = bL_sync cluster base
@ Signal that this CPU is coming UP:
mov r0, #CPU_COMING_UP
strb r0, [r8, r9]
dsb
@ At this point, the cluster cannot unexpectedly enter the GOING_DOWN
@ state, because there is at least one active CPU (this CPU).
mov r0, #BL_VLOCK_STRUCT_SIZE
mla r11, r0, r10, r11 @ r11 = cluster first man lock
mov r0, r11
mov r1, r9 @ cpu
bl vlock_trylock
cmp r0, #0 @ failed to get the lock?
bne cluster_setup_wait @ wait for cluster setup if so
ldrb r0, [r8, #BL_SYNC_CLUSTER_CLUSTER]
cmp r0, #CLUSTER_UP @ cluster already up?
bne cluster_setup @ if not, set up the cluster
@ Otherwise, release the first man lock and skip setup:
mov r0, r11
bl vlock_unlock
b cluster_setup_complete
cluster_setup:
@ Signal that the cluster is being brought up:
mov r0, #INBOUND_COMING_UP
strb r0, [r8, #BL_SYNC_CLUSTER_INBOUND]
dsb
@ Any CPU trying to take the cluster into CLUSTER_GOING_DOWN from this
@ point onwards will observe INBOUND_COMING_UP and abort.
@ Wait for any previously-pending cluster teardown operations to abort
@ or complete:
cluster_teardown_wait:
ldrb r0, [r8, #BL_SYNC_CLUSTER_CLUSTER]
cmp r0, #CLUSTER_GOING_DOWN
bne first_man_setup
wfe
b cluster_teardown_wait
first_man_setup:
@ If the outbound gave up before teardown started, skip cluster setup:
cmp r0, #CLUSTER_UP
beq cluster_setup_complete
@ power_up_setup is now responsible for setting up the cluster:
cmp r7, #0
blxne r7 @ Call power_up_setup if defined
@ Leave the cluster setup critical section:
dsb
mov r0, #INBOUND_NOT_COMING_UP
strb r0, [r8, #BL_SYNC_CLUSTER_INBOUND]
mov r0, #CLUSTER_UP
strb r0, [r8, #BL_SYNC_CLUSTER_CLUSTER]
dsb
sev
mov r0, r11
bl vlock_unlock
b cluster_setup_complete
@ In the contended case, non-first men wait here for cluster setup
@ to complete:
cluster_setup_wait:
ldrb r0, [r8, #BL_SYNC_CLUSTER_CLUSTER]
cmp r0, #CLUSTER_UP
wfene
bne cluster_setup_wait
cluster_setup_complete:
@ If a platform-specific CPU setup hook is needed, it should be
@ called from here.
@ Mark the CPU as up:
dsb
mov r0, #CPU_UP
strb r0, [r8, r9]
dsb
sev
bx r6
3: .word bL_entry_vectors - .
4: .word bL_power_up_setup_phys - 3b
5: .word bL_sync_phys - 3b
6: .word bL_vlock_phys - 3b
ENDPROC(bl_entry_point)
.bss
@ Magic to size and align the first-man vlock structures
@ so that each does not cross a 1KB boundary:
.align 5
.type bL_entry_vectors, #object
ENTRY(bL_entry_vectors)
.space 4 * BL_NR_CLUSTERS * BL_CPUS_PER_CLUSTER
.type bL_power_up_setup_phys, #object
ENTRY(bL_power_up_setup_phys)
.word 0 @ set by bL_switcher_init()


@@ -0,0 +1,653 @@
/*
* arch/arm/common/bL_switcher.c -- big.LITTLE cluster switcher core driver
*
* Created by: Nicolas Pitre, March 2012
* Copyright: (C) 2012 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* TODO:
*
* - Allow the outbound CPU to remain online for the inbound CPU to snoop its
* cache for a while.
* - Perform a switch during initialization to probe what the counterpart
* CPU's GIC interface ID is and stop hardcoding them in the code.
* - Local timers migration (they are not supported at the moment).
*/
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/cpu_pm.h>
#include <linux/kthread.h>
#include <linux/wait.h>
#include <linux/cpu.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/spinlock.h>
#include <asm/suspend.h>
#include <asm/hardware/gic.h>
#include <asm/bL_switcher.h>
#include <asm/bL_entry.h>
/*
 * Notifier list for kernel code that wants to be called around a switch.
 * It can also be used to veto a switch: a driver that must not be
 * interrupted registers a notifier whose callback refuses the switch
 * and carries out whatever work is needed instead.
 */
ATOMIC_NOTIFIER_HEAD(bL_switcher_notifier_list);
int register_bL_swicher_notifier(struct notifier_block *nb)
{
return atomic_notifier_chain_register(&bL_switcher_notifier_list, nb);
}
int unregister_bL_swicher_notifier(struct notifier_block *nb)
{
return atomic_notifier_chain_unregister(&bL_switcher_notifier_list, nb);
}
/*
 * Before migrating a CPU, the switcher core driver asks the registered
 * drivers whether the switch may go ahead.
 *
 * The switcher core decides based on the notifier chain's return value:
 * (-) negative value : refuse the switch
 * (+) zero or positive : go on with the switch
 */
static int bL_enter_migration(void)
{
return atomic_notifier_call_chain(&bL_switcher_notifier_list, SWITCH_ENTER, NULL);
}
static int bL_exit_migration(void)
{
return atomic_notifier_call_chain(&bL_switcher_notifier_list, SWITCH_EXIT, NULL);
}
/*
* Use our own MPIDR accessors as the generic ones in asm/cputype.h have
* __attribute_const__ and we don't want the compiler to assume any
* constness here.
*/
static int read_mpidr(void)
{
unsigned int id;
asm volatile ("mrc\tp15, 0, %0, c0, c0, 5" : "=r" (id));
return id;
}
/*
* bL switcher core code.
*/
const struct bL_power_ops *bL_platform_ops;
extern void setup_mm_for_reboot(void);
typedef void (*phys_reset_t)(unsigned long);
static void bL_do_switch(void *_unused)
{
unsigned mpidr, cpuid, clusterid, ob_cluster, ib_cluster;
phys_reset_t phys_reset;
pr_debug("%s\n", __func__);
mpidr = read_mpidr();
cpuid = mpidr & 0xf;
clusterid = (mpidr >> 8) & 0xf;
ob_cluster = clusterid;
ib_cluster = clusterid ^ 1;
/*
* Our state has been saved at this point. Let's release our
* inbound CPU.
*/
bL_set_entry_vector(cpuid, ib_cluster, cpu_resume);
sev();
/*
* From this point, we must assume that our counterpart CPU might
* have taken over in its parallel world already, as if execution
* just returned from cpu_suspend(). It is therefore important to
* be very careful not to make any change the other guy is not
* expecting. This is why we need stack isolation.
*
* Also, because of this special stack, we cannot rely on anything
* that expects a valid 'current' pointer. For example, printk()
* may give bogus "BUG: recent printk recursion!\n" messages
* because of that.
*/
bL_platform_ops->power_down(cpuid, ob_cluster);
/*
* Hey, we're not dead! This means a request to switch back
* has come from our counterpart and reset was deasserted before
* we had the chance to enter WFI. Let's turn off the MMU and
* branch back directly through our kernel entry point.
*/
setup_mm_for_reboot();
phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
phys_reset(virt_to_phys(bl_entry_point));
/* should never get here */
BUG();
}
/*
* Stack isolation (size needs to be optimized)
*/
static unsigned long __attribute__((__aligned__(L1_CACHE_BYTES)))
stacks[BL_CPUS_PER_CLUSTER][BL_NR_CLUSTERS][128];
extern void call_with_stack(void (*fn)(void *), void *arg, void *sp);
static int bL_switchpoint(unsigned long _unused)
{
unsigned int mpidr = read_mpidr();
unsigned int cpuid = mpidr & 0xf;
unsigned int clusterid = (mpidr >> 8) & 0xf;
void *stack = stacks[cpuid][clusterid] + ARRAY_SIZE(stacks[0][0]);
call_with_stack(bL_do_switch, NULL, stack);
BUG();
/*
 * Unreachable: this return statement only exists to silence a
 * compiler warning.
 */
return 0;
}
/*
* Generic switcher interface
*/
static DEFINE_SPINLOCK(switch_gic_lock);
/*
* bL_switch_to - Switch to a specific cluster for the current CPU
* @new_cluster_id: the ID of the cluster to switch to.
*
* This function must be called on the CPU to be switched.
* Returns 0 on success, else a negative status code.
*/
static int bL_switch_to(unsigned int new_cluster_id)
{
unsigned int mpidr, cpuid, clusterid, ob_cluster, ib_cluster;
int ret = 0;
mpidr = read_mpidr();
cpuid = mpidr & 0xf;
clusterid = (mpidr >> 8) & 0xf;
ob_cluster = clusterid;
ib_cluster = clusterid ^ 1;
if (new_cluster_id == clusterid)
return 0;
if (!bL_platform_ops)
return -ENOSYS;
pr_debug("before switch: CPU %d in cluster %d\n", cpuid, clusterid);
/* Close the gate for our entry vectors */
bL_set_entry_vector(cpuid, ob_cluster, NULL);
bL_set_entry_vector(cpuid, ib_cluster, NULL);
/*
* From this point we are entering the switch critical zone
* and can't sleep/schedule anymore.
*/
local_irq_disable();
local_fiq_disable();
/*
* Get spin_lock to protect concurrent accesses of GIC registers
* from both NWd(gic_migrate_target) and SWd(SMC of bL_power_up).
*/
spin_lock(&switch_gic_lock);
/*
* Let's wake up the inbound CPU now in case it requires some delay
* to come online, but leave it gated in our entry vector code.
*/
bL_platform_ops->power_up(cpuid, ib_cluster);
/* redirect GIC's SGIs to our counterpart */
gic_migrate_target(cpuid + ib_cluster*4);
/*
* Raise a SGI on the inbound CPU to make sure it doesn't stall
* in a possible WFI, such as the one in bL_do_switch().
*/
arm_send_ping_ipi(smp_processor_id());
spin_unlock(&switch_gic_lock);
ret = cpu_pm_enter();
if (ret)
goto out;
/* Let's do the actual CPU switch. */
ret = cpu_suspend((unsigned long)NULL, bL_switchpoint);
if (ret > 0)
ret = -EINVAL;
/* We are executing on the inbound CPU at this point */
mpidr = read_mpidr();
cpuid = mpidr & 0xf;
clusterid = (mpidr >> 8) & 0xf;
pr_debug("after switch: CPU %d in cluster %d\n", cpuid, clusterid);
BUG_ON(clusterid != ib_cluster);
bL_platform_ops->inbound_setup(cpuid, !clusterid);
ret = cpu_pm_exit();
out:
local_fiq_enable();
local_irq_enable();
if (ret)
pr_err("%s exiting with error %d\n", __func__, ret);
return ret;
}
struct bL_thread {
struct task_struct *task;
wait_queue_head_t wq;
int wanted_cluster;
};
static struct bL_thread bL_threads[BL_CPUS_PER_CLUSTER];
static int switch_ready = -1;
static DEFINE_SPINLOCK(switch_ready_lock);
#define BL_TIMEOUT_NS 50000000
static int bL_switcher_thread(void *arg)
{
struct bL_thread *t = arg;
struct sched_param param = { .sched_priority = 1 };
int ret;
sched_setscheduler_nocheck(current, SCHED_FIFO, &param);
do {
ret = wait_event_interruptible(t->wq, t->wanted_cluster != -1);
if (!ret) {
int cluster = t->wanted_cluster;
#ifdef CONFIG_EXYNOS5_CCI
t->wanted_cluster = -1;
bL_switch_to(cluster);
#else
static atomic_t switch_ready_cnt = ATOMIC_INIT(0);
unsigned long long start = sched_clock();
unsigned int cpuid = get_cpu();
signed long long wait_time = 0;
atomic_inc(&switch_ready_cnt);
dmb();
spin_lock(&switch_ready_lock);
if (switch_ready < 0) {
while (atomic_read(&switch_ready_cnt) <
num_online_cpus()) {
wait_time = sched_clock() - start;
if ((wait_time > BL_TIMEOUT_NS) ||
(wait_time < 0))
break;
}
if (wait_time > BL_TIMEOUT_NS) {
switch_ready = 0;
pr_info("%s: aborted on CPU %d by timeout (%d msecs)\n",
__func__, cpuid,
(int)(wait_time / NSEC_PER_MSEC));
} else if (wait_time < 0) {
switch_ready = 0;
pr_info("%s: sched_clock went backwards\n",
__func__);
} else {
switch_ready = 1;
}
}
spin_unlock(&switch_ready_lock);
atomic_dec(&switch_ready_cnt);
t->wanted_cluster = -1;
spin_lock(&switch_ready_lock);
if (switch_ready == 1) {
spin_unlock(&switch_ready_lock);
/* condition met before timeout */
bL_switch_to(cluster);
} else {
spin_unlock(&switch_ready_lock);
}
put_cpu();
#endif
}
} while (!kthread_should_stop());
return ret;
}
static int __init bL_switcher_thread_create(unsigned int cpu, struct bL_thread *t)
{
t->task = kthread_create_on_node(bL_switcher_thread, t,
cpu_to_node(cpu),
"kswitcher_%d", cpu);
if (IS_ERR(t->task)) {
pr_err("%s failed for CPU %d\n", __func__, cpu);
return PTR_ERR(t->task);
}
kthread_bind(t->task, cpu);
init_waitqueue_head(&t->wq);
t->wanted_cluster = -1;
wake_up_process(t->task);
return 0;
}
static unsigned int switch_operation = 0x11;
/*
 * bL_check_auto_switcher_enable - report whether automatic switching
 * is currently enabled
 */
bool bL_check_auto_switcher_enable(void)
{
bool result = true;
if (switch_operation != 0x11)
result = false;
return result;
}
/*
* bL_switch_request - Switch to a specific cluster for the given CPU
*
* @cpu: the CPU to switch
* @new_cluster_id: the ID of the cluster to switch to.
*
* This function causes a cluster switch on the given CPU. If the given
* CPU is the same as the calling CPU then the switch happens right away.
* Otherwise the request is put on a work queue to be scheduled on the
* remote CPU.
*/
void bL_switch_request(unsigned int cpu, unsigned int new_cluster_id)
{
struct bL_thread *t;
if (switch_operation == 0x00)
return;
if (cpu >= BL_CPUS_PER_CLUSTER) {
pr_err("%s: cpu %d out of bounds\n", __func__, cpu);
return;
}
t = &bL_threads[cpu];
if (IS_ERR_OR_NULL(t->task)) {
pr_err("%s: no switcher thread for cpu %d\n", __func__, cpu);
return;
}
t->wanted_cluster = new_cluster_id;
wake_up(&t->wq);
}
EXPORT_SYMBOL_GPL(bL_switch_request);
int bL_cluster_switch_request(unsigned int new_cluster)
{
struct bL_thread *t;
int cpu;
int ret;
BUG_ON(new_cluster >= 2);
if (unlikely(switch_operation == 0x00))
return -EPERM;
get_online_cpus();
spin_lock(&switch_ready_lock);
switch_ready = -1;
spin_unlock(&switch_ready_lock);
local_irq_disable();
if (bL_enter_migration() < 0) {
local_irq_enable();
put_online_cpus();
return -EBUSY;
}
for (cpu = BL_CPUS_PER_CLUSTER - 1; cpu >= 0; cpu--) {
if (unlikely(!cpu_online(cpu)))
continue;
t = &bL_threads[cpu];
if (unlikely(IS_ERR_OR_NULL(t->task))) {
pr_err("%s: no switcher thread for cpu %d\n", __func__, cpu);
local_irq_enable();
put_online_cpus();
return -EINVAL;
}
t->wanted_cluster = new_cluster;
wake_up(&t->wq);
smp_send_reschedule(cpu);
}
local_irq_enable();
schedule();
put_online_cpus();
bL_exit_migration();
ret = ((read_mpidr() >> 8) & 0xf) == new_cluster ? 0 : -EAGAIN;
return ret;
}
EXPORT_SYMBOL_GPL(bL_cluster_switch_request);
#ifdef CONFIG_BL_SWITCHER_DUMMY_IF
/*
* Dummy interface to user space (to be replaced by cpufreq based interface).
*/
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <asm/uaccess.h>
static ssize_t bL_switcher_write(struct file *file, const char __user *buf,
size_t len, loff_t *pos)
{
unsigned char val[3];
unsigned int cpu, cluster;
pr_debug("%s\n", __func__);
if (len < 3)
return -EINVAL;
if (copy_from_user(val, buf, 3))
return -EFAULT;
/* format: <cpu#>,<cluster#> */
if (val[0] < '0' || val[0] > '4' ||
val[1] != ',' ||
val[2] < '0' || val[2] > '1')
return -EINVAL;
cpu = val[0] - '0';
cluster = val[2] - '0';
if (cpu_online(cpu))
bL_switch_request(cpu, cluster);
return len;
}
static const struct file_operations bL_switcher_fops = {
.write = bL_switcher_write,
.owner = THIS_MODULE,
};
static struct miscdevice bL_switcher_device = {
MISC_DYNAMIC_MINOR,
"b.L_switcher",
&bL_switcher_fops
};
static ssize_t bL_operator_write(struct file *file, const char __user *buf,
size_t len, loff_t *pos)
{
char val[2];
unsigned int loop;
if (copy_from_user(val, buf, 2))
return -EFAULT;
if (val[0] < '0' || val[0] > '1')
goto cmd_err;
else if (val[1] < '0' || val[1] > '1')
goto cmd_err;
if (!strncmp(val, "00", 2)) {
pr_info("Disable switcher\n");
switch_operation = 0x00;
goto end;
}
if (!strncmp(val, "01", 2)) {
pr_info("LITTLE only\n");
switch_operation = 0x01;
for (loop = 0; loop < BL_CPUS_PER_CLUSTER; loop++) {
if (bL_running_cluster_num_cpus(loop) == 0)
bL_switch_request(loop, 1);
}
goto end;
}
if (!strncmp(val, "10", 2)) {
pr_info("big only\n");
switch_operation = 0x10;
for (loop = 0; loop < BL_CPUS_PER_CLUSTER; loop++) {
if (bL_running_cluster_num_cpus(loop) == 1)
bL_switch_request(loop, 0);
}
goto end;
}
if (!strncmp(val, "11", 2)) {
pr_info("big.LITTLE(switcher enable)\n");
switch_operation = 0x11;
goto end;
}
cmd_err:
pr_info("Usage: echo <command> > /dev/b.L_operator\n"
"command : 00 - switching disabled\n"
" 01 - LITTLE only\n"
" 10 - big only\n"
" 11 - big.LITTLE\n"
"e.g. echo 10 > /dev/b.L_operator\n");
end:
return len;
}
static ssize_t bL_operator_read(struct file *file, char __user *buf,
size_t len, loff_t *pos)
{
char buff[32];	/* large enough for the longest mode string */
size_t count = 0;
switch (switch_operation) {
case 0x00:
count += sprintf(buff, "Disable switcher\n");
break;
case 0x01:
count += sprintf(buff, "LITTLE only\n");
break;
case 0x10:
count += sprintf(buff, "big only\n");
break;
case 0x11:
count += sprintf(buff, "big.LITTLE\n");
break;
default:
count += sprintf(buff, "Unsupported operation mode\n");
break;
}
return simple_read_from_buffer(buf, len, pos, buff, count);
}
static const struct file_operations bL_operator_fops = {
.write = bL_operator_write,
.read = bL_operator_read,
.owner = THIS_MODULE,
};
static struct miscdevice bL_operator_device = {
MISC_DYNAMIC_MINOR,
"b.L_operator",
&bL_operator_fops
};
#endif
static void __init switcher_thread_on_each_cpu(struct work_struct *work)
{
unsigned int mpidr, cluster, cpuid;
mpidr = read_mpidr();
cluster = (mpidr >> 8) & 0xf;
cpuid = mpidr & 0xf;
BUG_ON(cluster >= BL_NR_CLUSTERS || cpuid >= BL_CPUS_PER_CLUSTER);
pr_debug("create switcher thread %d(%d)\n", cpuid, cluster);
bL_switcher_thread_create(cpuid, &bL_threads[cpuid]);
}
int __init bL_switcher_init(const struct bL_power_ops *ops)
{
int ret, err;
pr_info("big.LITTLE switcher initializing\n");
ret = bL_cluster_sync_init(ops);
if (ret)
return ret;
bL_platform_ops = ops;
#ifdef CONFIG_BL_SWITCHER_DUMMY_IF
err = misc_register(&bL_switcher_device);
if (err) {
pr_err("Failed to register the switcher device; "
"manual switching is unavailable\n");
return err;
}
err = misc_register(&bL_operator_device);
if (err) {
pr_err("Failed to register the switcher operation device; "
"bL_operator is unavailable\n");
return err;
}
#endif
schedule_on_each_cpu(switcher_thread_on_each_cpu);
pr_info("big.LITTLE switcher initialized\n");
return 0;
}


@@ -0,0 +1,96 @@
/*
* vlock.S - simple voting lock implementation for ARM
*
* Created by: Dave Martin, 2012-08-16
* Copyright: (C) 2012 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#include <linux/linkage.h>
#include <asm/bL_vlock.h>
#if VLOCK_VOTING_SIZE > 4
#define FEW(x...)
#define MANY(x...) x
#else
#define FEW(x...) x
#define MANY(x...)
#endif
@ voting lock for first-man coordination
.macro voting_begin rcpu:req, rbase:req, rscratch:req
mov \rscratch, #1
strb \rscratch, [\rbase, \rcpu]
.endm
.macro voting_end rcpu:req, rbase:req, rscratch:req
mov \rscratch, #0
strb \rscratch, [\rbase, \rcpu]
dsb
sev
.endm
@ r0: lock structure base
@ r1: CPU ID (0-based index within cluster)
ENTRY(vlock_trylock)
add r1, r1, #VLOCK_VOTING_OFFSET
voting_begin r0, r1, r2
ldrb r2, [r0, #VLOCK_OWNER_OFFSET] @ check whether lock is held
cmp r2, #VLOCK_OWNER_NONE
bne trylock_fail @ fail if so
strb r1, [r0, #VLOCK_OWNER_OFFSET] @ submit my vote
voting_end r0, r1, r2
@ Wait for the current round of voting to finish:
MANY( mov r3, #VLOCK_VOTING_OFFSET )
0:
MANY( ldr r2, [r0, r3] )
FEW( ldr r2, [r0, #VLOCK_VOTING_OFFSET] )
cmp r2, #0
wfene
bne 0b
MANY( add r3, r3, #4 )
MANY( cmp r3, #VLOCK_VOTING_OFFSET + VLOCK_VOTING_SIZE )
MANY( bne 0b )
@ Check who won:
ldrb r2, [r0, #VLOCK_OWNER_OFFSET]
eor r0, r1, r2 @ zero if I won, else nonzero
bx lr
trylock_fail:
voting_end r0, r1, r2
mov r0, #1 @ nonzero indicates that I lost
bx lr
ENDPROC(vlock_trylock)
@ r0: lock structure base
ENTRY(vlock_unlock)
mov r1, #VLOCK_OWNER_NONE
dsb
strb r1, [r0, #VLOCK_OWNER_OFFSET]
dsb
sev
mov pc, lr
ENDPROC(vlock_unlock)

File diff suppressed because it is too large


@@ -0,0 +1,94 @@
/*
* arch/arm/common/fiq_debugger_ringbuf.c
*
* simple lockless ringbuffer
*
* Copyright (C) 2010 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/slab.h>
struct fiq_debugger_ringbuf {
int len;
int head;
int tail;
u8 buf[];
};
static inline struct fiq_debugger_ringbuf *fiq_debugger_ringbuf_alloc(int len)
{
struct fiq_debugger_ringbuf *rbuf;
rbuf = kzalloc(sizeof(*rbuf) + len, GFP_KERNEL);
if (rbuf == NULL)
return NULL;
rbuf->len = len;
rbuf->head = 0;
rbuf->tail = 0;
smp_mb();
return rbuf;
}
static inline void fiq_debugger_ringbuf_free(struct fiq_debugger_ringbuf *rbuf)
{
kfree(rbuf);
}
static inline int fiq_debugger_ringbuf_level(struct fiq_debugger_ringbuf *rbuf)
{
int level = rbuf->head - rbuf->tail;
if (level < 0)
level = rbuf->len + level;
return level;
}
static inline int fiq_debugger_ringbuf_room(struct fiq_debugger_ringbuf *rbuf)
{
return rbuf->len - fiq_debugger_ringbuf_level(rbuf) - 1;
}
static inline u8
fiq_debugger_ringbuf_peek(struct fiq_debugger_ringbuf *rbuf, int i)
{
return rbuf->buf[(rbuf->tail + i) % rbuf->len];
}
static inline int
fiq_debugger_ringbuf_consume(struct fiq_debugger_ringbuf *rbuf, int count)
{
count = min(count, fiq_debugger_ringbuf_level(rbuf));
rbuf->tail = (rbuf->tail + count) % rbuf->len;
smp_mb();
return count;
}
static inline int
fiq_debugger_ringbuf_push(struct fiq_debugger_ringbuf *rbuf, u8 datum)
{
if (fiq_debugger_ringbuf_room(rbuf) == 0)
return 0;
rbuf->buf[rbuf->head] = datum;
smp_mb();
rbuf->head = (rbuf->head + 1) % rbuf->len;
smp_mb();
return 1;
}

arch/arm/common/fiq_glue.S Normal file

@@ -0,0 +1,111 @@
/*
* Copyright (C) 2008 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#include <linux/linkage.h>
#include <asm/assembler.h>
.text
.global fiq_glue_end
/* fiq stack: r0-r15,cpsr,spsr of interrupted mode */
ENTRY(fiq_glue)
/* store pc, cpsr from previous mode */
mrs r12, spsr
sub r11, lr, #4
subs r10, #1
bne nested_fiq
stmfd sp!, {r11-r12, lr}
/* store r8-r14 from previous mode */
sub sp, sp, #(7 * 4)
stmia sp, {r8-r14}^
nop
/* store r0-r7 from previous mode */
stmfd sp!, {r0-r7}
/* setup func(data,regs) arguments */
mov r0, r9
mov r1, sp
mov r3, r8
mov r7, sp
/* Get sp and lr from non-user modes */
and r4, r12, #MODE_MASK
cmp r4, #USR_MODE
beq fiq_from_usr_mode
mov r7, sp
orr r4, r4, #(PSR_I_BIT | PSR_F_BIT)
msr cpsr_c, r4
str sp, [r7, #(4 * 13)]
str lr, [r7, #(4 * 14)]
mrs r5, spsr
str r5, [r7, #(4 * 17)]
cmp r4, #(SVC_MODE | PSR_I_BIT | PSR_F_BIT)
/* use fiq stack if we reenter this mode */
subne sp, r7, #(4 * 3)
fiq_from_usr_mode:
msr cpsr_c, #(SVC_MODE | PSR_I_BIT | PSR_F_BIT)
mov r2, sp
sub sp, r7, #12
stmfd sp!, {r2, ip, lr}
/* call func(data,regs) */
blx r3
ldmfd sp, {r2, ip, lr}
mov sp, r2
/* restore/discard saved state */
cmp r4, #USR_MODE
beq fiq_from_usr_mode_exit
msr cpsr_c, r4
ldr sp, [r7, #(4 * 13)]
ldr lr, [r7, #(4 * 14)]
msr spsr_cxsf, r5
fiq_from_usr_mode_exit:
msr cpsr_c, #(FIQ_MODE | PSR_I_BIT | PSR_F_BIT)
ldmfd sp!, {r0-r7}
add sp, sp, #(7 * 4)
ldmfd sp!, {r11-r12, lr}
exit_fiq:
msr spsr_cxsf, r12
add r10, #1
movs pc, r11
nested_fiq:
orr r12, r12, #(PSR_F_BIT)
b exit_fiq
fiq_glue_end:
ENTRY(fiq_glue_setup) /* func, data, sp */
mrs r3, cpsr
msr cpsr_c, #(FIQ_MODE | PSR_I_BIT | PSR_F_BIT)
movs r8, r0
mov r9, r1
mov sp, r2
moveq r10, #0
movne r10, #1
msr cpsr_c, r3
bx lr


@@ -0,0 +1,100 @@
/*
* Copyright (C) 2010 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <asm/fiq.h>
#include <asm/fiq_glue.h>
extern unsigned char fiq_glue, fiq_glue_end;
extern void fiq_glue_setup(void *func, void *data, void *sp);
static struct fiq_handler fiq_debbuger_fiq_handler = {
.name = "fiq_glue",
};
DEFINE_PER_CPU(void *, fiq_stack);
static struct fiq_glue_handler *current_handler;
static DEFINE_MUTEX(fiq_glue_lock);
static void fiq_glue_setup_helper(void *info)
{
struct fiq_glue_handler *handler = info;
fiq_glue_setup(handler->fiq, handler,
__get_cpu_var(fiq_stack) + THREAD_START_SP);
}
int fiq_glue_register_handler(struct fiq_glue_handler *handler)
{
int ret;
int cpu;
if (!handler || !handler->fiq)
return -EINVAL;
mutex_lock(&fiq_glue_lock);
if (fiq_stack) {
ret = -EBUSY;
goto err_busy;
}
for_each_possible_cpu(cpu) {
void *stack;
stack = (void *)__get_free_pages(GFP_KERNEL, THREAD_SIZE_ORDER);
if (WARN_ON(!stack)) {
ret = -ENOMEM;
goto err_alloc_fiq_stack;
}
per_cpu(fiq_stack, cpu) = stack;
}
ret = claim_fiq(&fiq_debbuger_fiq_handler);
if (WARN_ON(ret))
goto err_claim_fiq;
current_handler = handler;
on_each_cpu(fiq_glue_setup_helper, handler, true);
set_fiq_handler(&fiq_glue, &fiq_glue_end - &fiq_glue);
mutex_unlock(&fiq_glue_lock);
return 0;
err_claim_fiq:
err_alloc_fiq_stack:
for_each_possible_cpu(cpu) {
if (!per_cpu(fiq_stack, cpu))
continue;
free_pages((unsigned long)per_cpu(fiq_stack, cpu),
THREAD_SIZE_ORDER);
per_cpu(fiq_stack, cpu) = NULL;
}
err_busy:
mutex_unlock(&fiq_glue_lock);
return ret;
}
/**
* fiq_glue_resume - Restore fiqs after suspend or low power idle states
*
* This must be called before calling local_fiq_enable after returning from a
* power state where the fiq mode registers were lost. If a driver provided
* a resume hook when it registered the handler it will be called.
*/
void fiq_glue_resume(void)
{
if (!current_handler)
return;
fiq_glue_setup(current_handler->fiq, current_handler,
__get_cpu_var(fiq_stack) + THREAD_START_SP);
if (current_handler->resume)
current_handler->resume(current_handler);
}


@@ -59,6 +59,7 @@ struct gic_chip_data {
u32 saved_spi_target[DIV_ROUND_UP(1020, 4)];
u32 __percpu *saved_ppi_enable;
u32 __percpu *saved_ppi_conf;
u32 __percpu *saved_sgi_pending;
#endif
struct irq_domain *domain;
unsigned int gic_irqs;
@@ -69,6 +70,13 @@ struct gic_chip_data {
static DEFINE_RAW_SPINLOCK(irq_controller_lock);
/*
* The GIC mapping of CPU interfaces does not necessarily match
* the logical CPU numbering. Let's use a mapping as returned
* by the GIC itself.
*/
static u8 gic_cpu_map[8] __read_mostly;
/*
* Supported arch specific GIC irq extension.
* Default make them NULL.
@@ -86,6 +94,10 @@ struct irq_chip gic_arch_extn = {
#define MAX_GIC_NR 1
#endif
#if defined(CONFIG_BL_SWITCHER)
DEFINE_PER_CPU(bool, is_switching);
#endif
static struct gic_chip_data gic_data[MAX_GIC_NR] __read_mostly;
#ifdef CONFIG_GIC_NON_BANKED
@@ -241,10 +253,9 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,
if (cpu >= 8 || cpu >= nr_cpu_ids)
return -EINVAL;
mask = 0xff << shift;
bit = 1 << (cpu_logical_map(cpu) + shift);
raw_spin_lock(&irq_controller_lock);
mask = 0xff << shift;
bit = gic_cpu_map[cpu] << shift;
val = readl_relaxed(reg) & ~mask;
writel_relaxed(val | bit, reg);
raw_spin_unlock(&irq_controller_lock);
@@ -349,11 +360,6 @@ static void __init gic_dist_init(struct gic_chip_data *gic)
u32 cpumask;
unsigned int gic_irqs = gic->gic_irqs;
void __iomem *base = gic_data_dist_base(gic);
u32 cpu = cpu_logical_map(smp_processor_id());
cpumask = 1 << cpu;
cpumask |= cpumask << 8;
cpumask |= cpumask << 16;
writel_relaxed(0, base + GIC_DIST_CTRL);
@@ -366,6 +372,7 @@ static void __init gic_dist_init(struct gic_chip_data *gic)
/*
* Set all global interrupts to this CPU only.
*/
cpumask = readl_relaxed(base + GIC_DIST_TARGET + 0);
for (i = 32; i < gic_irqs; i += 4)
writel_relaxed(cpumask, base + GIC_DIST_TARGET + i * 4 / 4);
@@ -389,8 +396,24 @@ static void __cpuinit gic_cpu_init(struct gic_chip_data *gic)
{
void __iomem *dist_base = gic_data_dist_base(gic);
void __iomem *base = gic_data_cpu_base(gic);
unsigned int cpu_mask, cpu = smp_processor_id();
int i;
/*
* Get what the GIC says our CPU mask is.
*/
BUG_ON(cpu >= 8);
cpu_mask = readl_relaxed(dist_base + GIC_DIST_TARGET + 0);
gic_cpu_map[cpu] = cpu_mask;
/*
* Clear our mask from the other map entries in case they're
* still undefined.
*/
for (i = 0; i < 8; i++)
if (i != cpu)
gic_cpu_map[i] &= ~cpu_mask;
/*
* Deal with the banked PPI and SGI interrupts - disable all
* PPI interrupts, ensure all SGI interrupts are enabled.
@@ -406,6 +429,10 @@ static void __cpuinit gic_cpu_init(struct gic_chip_data *gic)
writel_relaxed(0xf0, base + GIC_CPU_PRIMASK);
writel_relaxed(1, base + GIC_CPU_CTRL);
#if defined(CONFIG_BL_SWITCHER)
per_cpu(is_switching, cpu) = false;
#endif
}
#ifdef CONFIG_CPU_PM
@@ -503,13 +530,24 @@ static void gic_cpu_save(unsigned int gic_nr)
return;
ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_enable);
for (i = 0; i < DIV_ROUND_UP(32, 32); i++)
for (i = 0; i < DIV_ROUND_UP(32, 32); i++) {
ptr[i] = readl_relaxed(dist_base + GIC_DIST_ENABLE_SET + i * 4);
writel_relaxed(ptr[i], dist_base + GIC_DIST_ENABLE_CLEAR + i * 4);
}
ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_conf);
for (i = 0; i < DIV_ROUND_UP(32, 16); i++)
ptr[i] = readl_relaxed(dist_base + GIC_DIST_CONFIG + i * 4);
#if defined(CONFIG_BL_SWITCHER)
if (per_cpu(is_switching, smp_processor_id()) == true) {
ptr = __this_cpu_ptr(gic_data[gic_nr].saved_sgi_pending);
for (i = 0; i < DIV_ROUND_UP(16, 4); i++) {
ptr[i] = readl_relaxed(dist_base + GIC_DIST_SGI_PENDING_SET + i * 4);
writel_relaxed(ptr[i], dist_base + GIC_DIST_SGI_PENDING_CLEAR + i * 4);
}
}
#endif
}
static void gic_cpu_restore(unsigned int gic_nr)
@@ -539,6 +577,14 @@ static void gic_cpu_restore(unsigned int gic_nr)
for (i = 0; i < DIV_ROUND_UP(32, 4); i++)
writel_relaxed(0xa0a0a0a0, dist_base + GIC_DIST_PRI + i * 4);
#if defined(CONFIG_BL_SWITCHER)
if (per_cpu(is_switching, smp_processor_id()) == true) {
ptr = __this_cpu_ptr(gic_data[gic_nr].saved_sgi_pending);
for (i = 0; i < DIV_ROUND_UP(16, 4); i++)
writel_relaxed(ptr[i], dist_base + GIC_DIST_SGI_PENDING_SET + i * 4);
per_cpu(is_switching, smp_processor_id()) = false;
}
#endif
writel_relaxed(0xf0, cpu_base + GIC_CPU_PRIMASK);
writel_relaxed(1, cpu_base + GIC_CPU_CTRL);
}
@@ -588,6 +634,10 @@ static void __init gic_pm_init(struct gic_chip_data *gic)
sizeof(u32));
BUG_ON(!gic->saved_ppi_conf);
gic->saved_sgi_pending = __alloc_percpu(DIV_ROUND_UP(16, 4) * 4,
sizeof(u32));
BUG_ON(!gic->saved_sgi_pending);
if (gic == &gic_data[0])
cpu_pm_register_notifier(&gic_notifier_block);
}
@@ -646,7 +696,7 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
{
irq_hw_number_t hwirq_base;
struct gic_chip_data *gic;
int gic_irqs, irq_base;
int gic_irqs, irq_base, i;
BUG_ON(gic_nr >= MAX_GIC_NR);
@@ -682,6 +732,13 @@ void __init gic_init_bases(unsigned int gic_nr, int irq_start,
gic_set_base_accessor(gic, gic_get_common_base);
}
/*
* Initialize the CPU interface map to all CPUs.
* It will be refined as each CPU probes its ID.
*/
for (i = 0; i < 8; i++)
gic_cpu_map[i] = 0xff;
/*
* For primary GICs, skip over SGIs.
* For secondary GICs, skip over PPIs, too.
@@ -733,11 +790,13 @@ void __cpuinit gic_secondary_init(unsigned int gic_nr)
void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
{
int cpu;
unsigned long map = 0;
unsigned long map = 0, flags;
raw_spin_lock_irqsave(&irq_controller_lock, flags);
/* Convert our logical CPU mask into a physical one. */
for_each_cpu(cpu, mask)
map |= 1 << cpu_logical_map(cpu);
map |= gic_cpu_map[cpu];
/*
* Ensure that stores to Normal memory are visible to the
@@ -747,6 +806,58 @@ void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
/* this always happens on GIC0 */
writel_relaxed(map << 16 | irq, gic_data_dist_base(&gic_data[0]) + GIC_DIST_SOFTINT);
raw_spin_unlock_irqrestore(&irq_controller_lock, flags);
}
#endif
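The gic_raise_softirq() change above stops translating the logical cpumask with cpu_logical_map() and instead ORs together entries of the gic_cpu_map[] table, so SGIs keep reaching the right interface after a b.L switch. A minimal user-space sketch of that translation (hypothetical table values, no hardware access):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-CPU interface map, as gic_cpu_init() would have
 * probed it: here logical CPU i owns GIC interface bit (1 << i), but
 * after a big.LITTLE switch the entries can point elsewhere. */
static uint8_t gic_cpu_map_sketch[8] = {
	0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80
};

/* Mirror of the gic_raise_softirq() loop: OR together the GIC
 * interface masks of every logical CPU set in 'mask'. */
static unsigned long logical_mask_to_gic_map(unsigned long mask)
{
	unsigned long map = 0;
	int cpu;

	for (cpu = 0; cpu < 8; cpu++)
		if (mask & (1UL << cpu))
			map |= gic_cpu_map_sketch[cpu];
	return map;
}
```

The resulting map is what gets shifted into the GIC_DIST_SOFTINT target-list field; because the table is consulted under irq_controller_lock in the real code, a concurrent gic_migrate_target() cannot leave an SGI aimed at a vacated interface.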
#ifdef CONFIG_BL_SWITCHER
/*
 * gic_migrate_target - migrate IRQs to another CPU interface
*
* @new_cpu_id: the CPU target ID to migrate IRQs to
*
* Migrate all peripheral interrupts with a target matching the current CPU
* to the interface corresponding to @new_cpu_id. The CPU interface mapping
* is also updated. Targets to other CPU interfaces are unchanged.
* This must be called with IRQs locally disabled.
*/
void gic_migrate_target(unsigned int new_cpu_id)
{
unsigned int old_cpu_id, gic_irqs, gic_nr = 0;
void __iomem *dist_base;
int i, ror_val, cpu = smp_processor_id();
u32 val, old_mask, active_mask;
	BUG_ON(gic_nr >= MAX_GIC_NR);
dist_base = gic_data_dist_base(&gic_data[gic_nr]);
if (!dist_base)
return;
gic_irqs = gic_data[gic_nr].gic_irqs;
old_cpu_id = __ffs(gic_cpu_map[cpu]);
old_mask = 0x01010101 << old_cpu_id;
ror_val = (old_cpu_id - new_cpu_id) & 31;
raw_spin_lock(&irq_controller_lock);
per_cpu(is_switching, cpu) = true;
gic_cpu_map[cpu] = 1 << new_cpu_id;
for (i = 8; i < DIV_ROUND_UP(gic_irqs, 4); i++) {
val = readl_relaxed(dist_base + GIC_DIST_TARGET + i * 4);
active_mask = val & old_mask;
if (active_mask) {
val &= ~active_mask;
val |= ror32(active_mask, ror_val);
writel_relaxed(val, dist_base + GIC_DIST_TARGET + i * 4);
}
}
raw_spin_unlock(&irq_controller_lock);
}
#endif
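gic_migrate_target() above rewrites each GIC_DIST_TARGET word by masking out the bytes that targeted the old CPU interface and rotating that mask to the new interface's bit position. A standalone sketch of just that bit manipulation, with assumed register values and no MMIO:

```c
#include <assert.h>
#include <stdint.h>

/* Portable stand-in for the kernel's ror32() helper. */
static uint32_t ror32_sketch(uint32_t word, unsigned int shift)
{
	return (word >> shift) | (word << ((32 - shift) & 31));
}

/* Mirror of the gic_migrate_target() inner loop for one register:
 * every per-IRQ target byte that pointed at old_cpu_id is made to
 * point at new_cpu_id; targets to other interfaces are untouched. */
static uint32_t retarget(uint32_t val, unsigned int old_cpu_id,
			 unsigned int new_cpu_id)
{
	uint32_t old_mask = 0x01010101u << old_cpu_id;
	unsigned int ror_val = (old_cpu_id - new_cpu_id) & 31;
	uint32_t active_mask = val & old_mask;

	if (active_mask) {
		val &= ~active_mask;
		val |= ror32_sketch(active_mask, ror_val);
	}
	return val;
}
```

Each GIC_DIST_TARGET register packs four one-byte target lists, so the 0x01010101 pattern selects the same interface bit in all four bytes at once, and one rotate moves all matching targets in a single read-modify-write.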


@@ -0,0 +1,308 @@
CONFIG_EXPERIMENTAL=y
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_PANIC_TIMEOUT=5
CONFIG_KALLSYMS_ALL=y
# CONFIG_AIO is not set
CONFIG_EMBEDDED=y
# CONFIG_SLUB_DEBUG is not set
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_EFI_PARTITION=y
CONFIG_ARCH_EXYNOS=y
CONFIG_S3C_LOWLEVEL_UART_PORT=2
CONFIG_S3C_ADC=y
CONFIG_S3C24XX_PWM=y
CONFIG_ARCH_EXYNOS5=y
CONFIG_EXYNOS_FIQ_DEBUGGER=y
CONFIG_MACH_SMDK5250=y
CONFIG_ARM_TRUSTZONE=y
CONFIG_FIQ_DEBUGGER=y
CONFIG_FIQ_DEBUGGER_NO_SLEEP=y
CONFIG_FIQ_DEBUGGER_CONSOLE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SMP=y
CONFIG_NR_CPUS=2
CONFIG_PREEMPT=y
CONFIG_AEABI=y
CONFIG_HIGHMEM=y
CONFIG_CMDLINE="vmalloc=512M"
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_DEFAULT_GOV_INTERACTIVE=y
CONFIG_VFP=y
CONFIG_NEON=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=y
CONFIG_WAKELOCK=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_NET_KEY=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_INET_ESP=y
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_LRO is not set
CONFIG_IPV6=y
CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
CONFIG_INET6_IPCOMP=y
CONFIG_IPV6_MIP6=y
CONFIG_IPV6_TUNNEL=y
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_NETFILTER=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_H323=y
CONFIG_NF_CONNTRACK_IRC=y
CONFIG_NF_CONNTRACK_NETBIOS_NS=y
CONFIG_NF_CONNTRACK_PPTP=y
CONFIG_NF_CONNTRACK_SANE=y
CONFIG_NF_CONNTRACK_TFTP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NETFILTER_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
CONFIG_NETFILTER_XT_TARGET_MARK=y
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
CONFIG_NETFILTER_XT_TARGET_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_TRACE=y
CONFIG_NETFILTER_XT_MATCH_COMMENT=y
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=y
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
CONFIG_NETFILTER_XT_MATCH_HELPER=y
CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
CONFIG_NETFILTER_XT_MATCH_LENGTH=y
CONFIG_NETFILTER_XT_MATCH_LIMIT=y
CONFIG_NETFILTER_XT_MATCH_MAC=y
CONFIG_NETFILTER_XT_MATCH_MARK=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
CONFIG_NETFILTER_XT_MATCH_QTAGUID=y
CONFIG_NETFILTER_XT_MATCH_QUOTA=y
CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
CONFIG_NETFILTER_XT_MATCH_QUOTA2_LOG=y
CONFIG_NETFILTER_XT_MATCH_SOCKET=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
CONFIG_NETFILTER_XT_MATCH_STRING=y
CONFIG_NETFILTER_XT_MATCH_TIME=y
CONFIG_NETFILTER_XT_MATCH_U32=y
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_MATCH_AH=y
CONFIG_IP_NF_MATCH_ECN=y
CONFIG_IP_NF_MATCH_TTL=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_REJECT_SKERR=y
CONFIG_IP_NF_TARGET_LOG=y
CONFIG_NF_NAT=y
CONFIG_IP_NF_TARGET_MASQUERADE=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_RAW=y
CONFIG_IP_NF_ARPTABLES=y
CONFIG_IP_NF_ARPFILTER=y
CONFIG_IP_NF_ARP_MANGLE=y
CONFIG_NF_CONNTRACK_IPV6=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_TARGET_LOG=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_TARGET_REJECT_SKERR=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_RAW=y
CONFIG_PHONET=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_INGRESS=y
CONFIG_NET_CLS_U32=y
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_U32=y
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=y
CONFIG_NET_ACT_GACT=y
CONFIG_NET_ACT_MIRRED=y
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=8192
CONFIG_UID_STAT=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_SG=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_UEVENT=y
CONFIG_INPUT_EVDEV=y
CONFIG_INPUT_KEYRESET=y
# CONFIG_INPUT_MOUSE is not set
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_EGALAX_I2C=y
CONFIG_INPUT_MISC=y
CONFIG_INPUT_KEYCHORD=y
CONFIG_INPUT_UINPUT=y
CONFIG_INPUT_GPIO=y
# CONFIG_VT is not set
# CONFIG_LEGACY_PTYS is not set
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_SAMSUNG=y
CONFIG_SERIAL_SAMSUNG_CONSOLE=y
CONFIG_HW_RANDOM=y
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_S3C2410=y
CONFIG_POWER_SUPPLY=y
# CONFIG_HWMON is not set
CONFIG_WATCHDOG=y
CONFIG_S3C2410_WATCHDOG=y
CONFIG_MFD_MAX8997=y
CONFIG_MEDIA_SUPPORT=y
CONFIG_VIDEO_DEV=y
# CONFIG_MEDIA_TUNER_SIMPLE is not set
# CONFIG_MEDIA_TUNER_TDA8290 is not set
# CONFIG_MEDIA_TUNER_TDA827X is not set
# CONFIG_MEDIA_TUNER_TDA18271 is not set
# CONFIG_MEDIA_TUNER_TDA9887 is not set
# CONFIG_MEDIA_TUNER_TEA5761 is not set
# CONFIG_MEDIA_TUNER_TEA5767 is not set
# CONFIG_MEDIA_TUNER_MT20XX is not set
# CONFIG_MEDIA_TUNER_MT2060 is not set
# CONFIG_MEDIA_TUNER_MT2063 is not set
# CONFIG_MEDIA_TUNER_MT2266 is not set
# CONFIG_MEDIA_TUNER_MT2131 is not set
# CONFIG_MEDIA_TUNER_QT1010 is not set
# CONFIG_MEDIA_TUNER_XC2028 is not set
# CONFIG_MEDIA_TUNER_XC5000 is not set
# CONFIG_MEDIA_TUNER_XC4000 is not set
# CONFIG_MEDIA_TUNER_MXL5005S is not set
# CONFIG_MEDIA_TUNER_MXL5007T is not set
# CONFIG_MEDIA_TUNER_MC44S803 is not set
# CONFIG_MEDIA_TUNER_MAX2165 is not set
# CONFIG_MEDIA_TUNER_TDA18218 is not set
# CONFIG_MEDIA_TUNER_TDA18212 is not set
CONFIG_VIDEO_EXYNOS=y
# CONFIG_VIDEO_EXYNOS_FIMC_LITE is not set
# CONFIG_VIDEO_EXYNOS_MIPI_CSIS is not set
CONFIG_VIDEO_EXYNOS_GSCALER=y
CONFIG_VIDEO_EXYNOS_JPEG=y
CONFIG_VIDEO_EXYNOS_FIMG2D=y
CONFIG_VIDEO_EXYNOS_MFC=y
CONFIG_VIDEO_EXYNOS_TV=y
CONFIG_VIDEO_EXYNOS_HDMI_CEC=y
CONFIG_VIDEO_EXYNOS_ROTATOR=y
CONFIG_VIDEO_EXYNOS5_FIMC_IS=y
CONFIG_VIDEO_S5K4E5=y
CONFIG_VIDEO_S5K6A3=y
CONFIG_ION=y
CONFIG_ION_EXYNOS=y
CONFIG_ION_EXYNOS_CONTIGHEAP_SIZE=100000
CONFIG_MALI_T6XX=y
CONFIG_MALI_LICENSE_IS_GPL=y
CONFIG_MALI_PLATFORM_FAKE=y
CONFIG_MALI_T6XX_ENABLE_TRACE=y
CONFIG_MALI_PLATFORM_THIRDPARTY=y
CONFIG_MALI_PLATFORM_THIRDPARTY_NAME="exynos5"
CONFIG_MALI_T6XX_DVFS=y
CONFIG_MALI_T6XX_DEBUG_SYS=y
CONFIG_MALI_T6XX_RT_PM=y
# CONFIG_MALI_GATOR_SUPPORT is not set
# CONFIG_MALI_EXPERT is not set
CONFIG_FB=y
CONFIG_FB_S3C=y
CONFIG_FB_MIPI_DSIM=y
CONFIG_BACKLIGHT_LCD_SUPPORT=y
CONFIG_LCD_CLASS_DEVICE=y
CONFIG_LCD_PLATFORM=y
CONFIG_LCD_MIPI_TC358764=y
CONFIG_BACKLIGHT_CLASS_DEVICE=y
# CONFIG_BACKLIGHT_GENERIC is not set
CONFIG_BACKLIGHT_PWM=y
CONFIG_SOUND=y
CONFIG_SND=y
# CONFIG_SND_DRIVERS is not set
# CONFIG_SND_ARM is not set
CONFIG_SND_SOC=y
CONFIG_SND_SOC_SAMSUNG=y
CONFIG_SND_SOC_SAMSUNG_SMDK_WM8994=y
# CONFIG_HID_SUPPORT is not set
CONFIG_USB_GADGET=y
CONFIG_USB_EXYNOS_SS_UDC=y
CONFIG_USB_G_ANDROID=y
CONFIG_MMC=y
CONFIG_MMC_UNSAFE_RESUME=y
CONFIG_MMC_CLKGATE=y
CONFIG_MMC_EMBEDDED_SDIO=y
CONFIG_MMC_PARANOID_SD_INIT=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_S3C=y
CONFIG_MMC_SDHCI_S3C_DMA=y
CONFIG_MMC_DW=y
CONFIG_MMC_DW_IDMAC=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_S3C=y
CONFIG_STAGING=y
CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
CONFIG_ASHMEM=y
CONFIG_ANDROID_LOGGER=y
CONFIG_ANDROID_RAM_CONSOLE=y
CONFIG_ANDROID_TIMED_GPIO=y
CONFIG_ANDROID_LOW_MEMORY_KILLER=y
CONFIG_ANDROID_INTF_ALARM_DEV=y
CONFIG_EXYNOS_IOMMU=y
CONFIG_EXT2_FS=y
CONFIG_EXT4_FS=y
# CONFIG_EXT4_FS_XATTR is not set
# CONFIG_DNOTIFY is not set
CONFIG_FUSE_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_FS=y
CONFIG_DETECT_HUNG_TASK=y
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_DEBUG_INFO=y
CONFIG_SYSCTL_SYSCALL_CHECK=y
CONFIG_KGDB=y
CONFIG_KGDB_KDB=y
# CONFIG_ARM_UNWIND is not set
CONFIG_DEBUG_USER=y
CONFIG_CRC_CCITT=y


@@ -0,0 +1,287 @@
CONFIG_EXPERIMENTAL=y
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_PANIC_TIMEOUT=5
CONFIG_KALLSYMS_ALL=y
# CONFIG_AIO is not set
CONFIG_EMBEDDED=y
CONFIG_PERF_EVENTS=y
CONFIG_DEBUG_PERF_USE_VMALLOC=y
# CONFIG_SLUB_DEBUG is not set
CONFIG_JUMP_LABEL=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_EFI_PARTITION=y
CONFIG_ARCH_EXYNOS=y
CONFIG_S3C_LOWLEVEL_UART_PORT=1
CONFIG_S3C_ADC=y
CONFIG_S3C24XX_PWM=y
CONFIG_ARCH_EXYNOS4=y
# CONFIG_CPU_EXYNOS4210 is not set
CONFIG_MACH_SMDK4412=y
CONFIG_ARM_TRUSTZONE=y
CONFIG_ARM_ERRATA_743622=y
CONFIG_ARM_ERRATA_751472=y
CONFIG_ARM_ERRATA_754322=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SMP=y
CONFIG_PREEMPT=y
CONFIG_AEABI=y
CONFIG_HIGHMEM=y
CONFIG_ARM_FLUSH_CONSOLE_ON_RESTART=y
CONFIG_CMDLINE="console=ttySAC1,115200n8 androidboot.console=ttySAC1"
CONFIG_CMDLINE_EXTEND=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_VFP=y
CONFIG_NEON=y
CONFIG_PM_AUTOSLEEP=y
CONFIG_PM_WAKELOCKS=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_NET_KEY=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_INET_ESP=y
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_LRO is not set
CONFIG_IPV6=y
CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
CONFIG_INET6_IPCOMP=y
CONFIG_IPV6_MIP6=y
CONFIG_IPV6_TUNNEL=y
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_NETFILTER=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_H323=y
CONFIG_NF_CONNTRACK_IRC=y
CONFIG_NF_CONNTRACK_NETBIOS_NS=y
CONFIG_NF_CONNTRACK_PPTP=y
CONFIG_NF_CONNTRACK_SANE=y
CONFIG_NF_CONNTRACK_TFTP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NETFILTER_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
CONFIG_NETFILTER_XT_TARGET_MARK=y
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
CONFIG_NETFILTER_XT_TARGET_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_TRACE=y
CONFIG_NETFILTER_XT_MATCH_COMMENT=y
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=y
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
CONFIG_NETFILTER_XT_MATCH_HELPER=y
CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
CONFIG_NETFILTER_XT_MATCH_LENGTH=y
CONFIG_NETFILTER_XT_MATCH_LIMIT=y
CONFIG_NETFILTER_XT_MATCH_MAC=y
CONFIG_NETFILTER_XT_MATCH_MARK=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
CONFIG_NETFILTER_XT_MATCH_QTAGUID=y
CONFIG_NETFILTER_XT_MATCH_QUOTA=y
CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
CONFIG_NETFILTER_XT_MATCH_QUOTA2_LOG=y
CONFIG_NETFILTER_XT_MATCH_SOCKET=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
CONFIG_NETFILTER_XT_MATCH_STRING=y
CONFIG_NETFILTER_XT_MATCH_TIME=y
CONFIG_NETFILTER_XT_MATCH_U32=y
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_MATCH_AH=y
CONFIG_IP_NF_MATCH_ECN=y
CONFIG_IP_NF_MATCH_TTL=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_REJECT_SKERR=y
CONFIG_NF_NAT=y
CONFIG_IP_NF_TARGET_MASQUERADE=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_RAW=y
CONFIG_IP_NF_ARPTABLES=y
CONFIG_IP_NF_ARPFILTER=y
CONFIG_IP_NF_ARP_MANGLE=y
CONFIG_NF_CONNTRACK_IPV6=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_TARGET_REJECT_SKERR=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_RAW=y
CONFIG_PHONET=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_INGRESS=y
CONFIG_NET_CLS_U32=y
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_U32=y
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=y
CONFIG_NET_ACT_GACT=y
CONFIG_NET_ACT_MIRRED=y
CONFIG_SYNC=y
CONFIG_SW_SYNC=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=16384
CONFIG_UID_STAT=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_SG=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_UEVENT=y
CONFIG_INPUT_EVDEV=y
CONFIG_INPUT_KEYRESET=y
# CONFIG_KEYBOARD_ATKBD is not set
CONFIG_KEYBOARD_GPIO=y
CONFIG_KEYBOARD_SAMSUNG=y
# CONFIG_INPUT_MOUSE is not set
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_PIXCIR=y
CONFIG_INPUT_MISC=y
CONFIG_INPUT_KEYCHORD=y
CONFIG_INPUT_UINPUT=y
CONFIG_INPUT_GPIO=y
# CONFIG_VT is not set
# CONFIG_LEGACY_PTYS is not set
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_SAMSUNG=y
CONFIG_SERIAL_SAMSUNG_CONSOLE=y
CONFIG_HW_RANDOM=y
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_GPIO=y
CONFIG_I2C_S3C2410=y
CONFIG_SPI=y
CONFIG_SPI_GPIO=y
CONFIG_POWER_SUPPLY=y
CONFIG_SENSORS_S3C=y
CONFIG_SENSORS_S3C_RAW=y
CONFIG_WATCHDOG=y
CONFIG_S3C2410_WATCHDOG=y
CONFIG_MFD_MAX8997=y
CONFIG_MFD_MAX77686=y
CONFIG_MFD_S5M_CORE=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_REGULATOR_MAX77686=y
CONFIG_REGULATOR_MAX8649=y
CONFIG_REGULATOR_MAX8997=y
CONFIG_REGULATOR_S5M8767=y
CONFIG_REGULATOR_WM8994=y
CONFIG_MEDIA_SUPPORT=y
CONFIG_MEDIA_CONTROLLER=y
CONFIG_VIDEO_DEV=y
CONFIG_VIDEO_V4L2_SUBDEV_API=y
CONFIG_VIDEO_M5MOLS=y
CONFIG_V4L_PLATFORM_DRIVERS=y
CONFIG_VIDEO_SAMSUNG_S5P_FIMC=y
CONFIG_VIDEO_S5P_MIPI_CSIS=y
CONFIG_VIDEO_EXYNOS=y
CONFIG_VIDEO_EXYNOS_MFC=y
CONFIG_V4L_MEM2MEM_DRIVERS=y
CONFIG_ION=y
CONFIG_ION_EXYNOS=y
CONFIG_ION_EXYNOS_CONTIGHEAP_SIZE=12288
CONFIG_FB=y
CONFIG_FB_S3C=y
CONFIG_BACKLIGHT_LCD_SUPPORT=y
CONFIG_LCD_CLASS_DEVICE=y
CONFIG_LCD_PLATFORM=y
CONFIG_LCD_LMS501KF03=y
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_BACKLIGHT_PWM=y
CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_SND_SOC=y
CONFIG_SND_SOC_SAMSUNG=y
CONFIG_SND_SOC_SAMSUNG_SMDK_WM8994=y
CONFIG_HID_SUPPORT=y
CONFIG_USB_GADGET=y
CONFIG_USB_S3C_OTGD=y
CONFIG_USB_G_ANDROID=y
CONFIG_MMC=y
CONFIG_MMC_UNSAFE_RESUME=y
CONFIG_MMC_CLKGATE=y
CONFIG_MMC_EMBEDDED_SDIO=y
CONFIG_MMC_PARANOID_SD_INIT=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_S3C=y
CONFIG_MMC_SDHCI_S3C_DMA=y
CONFIG_MMC_DW=y
CONFIG_MMC_DW_IDMAC=y
CONFIG_SWITCH=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_S3C=y
CONFIG_STAGING=y
CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
CONFIG_ASHMEM=y
CONFIG_ANDROID_LOGGER=y
CONFIG_ANDROID_RAM_CONSOLE=y
CONFIG_ANDROID_TIMED_GPIO=y
CONFIG_ANDROID_LOW_MEMORY_KILLER=y
CONFIG_ANDROID_INTF_ALARM_DEV=y
CONFIG_EXYNOS_IOMMU=y
CONFIG_EXT2_FS=y
CONFIG_EXT4_FS=y
# CONFIG_EXT4_FS_XATTR is not set
# CONFIG_DNOTIFY is not set
CONFIG_FUSE_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_CRAMFS=y
CONFIG_ROMFS_FS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PRINTK_TIME=y
CONFIG_DETECT_HUNG_TASK=y
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_DEBUG_INFO=y
CONFIG_SCHED_TRACER=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_KGDB=y
CONFIG_KGDB_KDB=y
# CONFIG_ARM_UNWIND is not set
CONFIG_DEBUG_USER=y
CONFIG_CRC_CCITT=y


@@ -0,0 +1,334 @@
CONFIG_EXPERIMENTAL=y
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_PANIC_TIMEOUT=5
CONFIG_KALLSYMS_ALL=y
# CONFIG_AIO is not set
CONFIG_EMBEDDED=y
CONFIG_PERF_EVENTS=y
CONFIG_DEBUG_PERF_USE_VMALLOC=y
CONFIG_JUMP_LABEL=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_EFI_PARTITION=y
CONFIG_ARCH_EXYNOS=y
CONFIG_S3C_LOWLEVEL_UART_PORT=1
CONFIG_S3C_ADC=y
CONFIG_S3C24XX_PWM=y
CONFIG_ARCH_EXYNOS4=y
# CONFIG_CPU_EXYNOS4210 is not set
CONFIG_MACH_SMDK4412=y
CONFIG_ARM_TRUSTZONE=y
CONFIG_ARM_ERRATA_743622=y
CONFIG_ARM_ERRATA_751472=y
CONFIG_ARM_ERRATA_754322=y
CONFIG_FIQ_DEBUGGER=y
CONFIG_FIQ_DEBUGGER_NO_SLEEP=y
CONFIG_FIQ_DEBUGGER_CONSOLE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SMP=y
CONFIG_PREEMPT=y
CONFIG_AEABI=y
CONFIG_HIGHMEM=y
CONFIG_COMPACTION=y
CONFIG_ARM_FLUSH_CONSOLE_ON_RESTART=y
CONFIG_CMDLINE="vmalloc=512M debug_core.break_on_panic=0 debug_core.break_on_exception=0 no_console_suspend s3c2410-wdt.tmr_atboot=1 s3c2410-wdt.tmr_margin=30"
CONFIG_CMDLINE_EXTEND=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_DEFAULT_GOV_INTERACTIVE=y
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_IDLE=y
CONFIG_VFP=y
CONFIG_NEON=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=y
CONFIG_PM_AUTOSLEEP=y
CONFIG_PM_WAKELOCKS=y
CONFIG_PM_WAKELOCKS_LIMIT=0
# CONFIG_PM_WAKELOCKS_GC is not set
CONFIG_PM_RUNTIME=y
CONFIG_PM_DEBUG=y
CONFIG_SUSPEND_TIME=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_XFRM_USER=y
CONFIG_NET_KEY=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_INET_ESP=y
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_LRO is not set
CONFIG_IPV6=y
CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
CONFIG_INET6_IPCOMP=y
CONFIG_IPV6_MIP6=y
CONFIG_IPV6_TUNNEL=y
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_NETFILTER=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_H323=y
CONFIG_NF_CONNTRACK_IRC=y
CONFIG_NF_CONNTRACK_NETBIOS_NS=y
CONFIG_NF_CONNTRACK_PPTP=y
CONFIG_NF_CONNTRACK_SANE=y
CONFIG_NF_CONNTRACK_TFTP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NETFILTER_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=y
CONFIG_NETFILTER_XT_TARGET_MARK=y
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
CONFIG_NETFILTER_XT_TARGET_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_TRACE=y
CONFIG_NETFILTER_XT_MATCH_COMMENT=y
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=y
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
CONFIG_NETFILTER_XT_MATCH_HELPER=y
CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
CONFIG_NETFILTER_XT_MATCH_LENGTH=y
CONFIG_NETFILTER_XT_MATCH_LIMIT=y
CONFIG_NETFILTER_XT_MATCH_MAC=y
CONFIG_NETFILTER_XT_MATCH_MARK=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
CONFIG_NETFILTER_XT_MATCH_QTAGUID=y
CONFIG_NETFILTER_XT_MATCH_QUOTA=y
CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
CONFIG_NETFILTER_XT_MATCH_QUOTA2_LOG=y
CONFIG_NETFILTER_XT_MATCH_SOCKET=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
CONFIG_NETFILTER_XT_MATCH_STRING=y
CONFIG_NETFILTER_XT_MATCH_TIME=y
CONFIG_NETFILTER_XT_MATCH_U32=y
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_MATCH_AH=y
CONFIG_IP_NF_MATCH_ECN=y
CONFIG_IP_NF_MATCH_TTL=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_REJECT_SKERR=y
CONFIG_NF_NAT=y
CONFIG_IP_NF_TARGET_MASQUERADE=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_RAW=y
CONFIG_IP_NF_ARPTABLES=y
CONFIG_IP_NF_ARPFILTER=y
CONFIG_IP_NF_ARP_MANGLE=y
CONFIG_NF_CONNTRACK_IPV6=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_TARGET_REJECT_SKERR=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_RAW=y
CONFIG_PHONET=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_INGRESS=y
CONFIG_NET_CLS_U32=y
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_U32=y
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=y
CONFIG_NET_ACT_GACT=y
CONFIG_NET_ACT_MIRRED=y
CONFIG_SYNC=y
CONFIG_SW_SYNC=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=8192
CONFIG_UID_STAT=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_SG=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_UEVENT=y
CONFIG_INPUT_EVDEV=y
CONFIG_INPUT_KEYRESET=y
# CONFIG_KEYBOARD_ATKBD is not set
CONFIG_KEYBOARD_GPIO=y
CONFIG_KEYBOARD_SAMSUNG=y
# CONFIG_INPUT_MOUSE is not set
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_PIXCIR=y
CONFIG_INPUT_MISC=y
CONFIG_INPUT_KEYCHORD=y
CONFIG_INPUT_UINPUT=y
CONFIG_INPUT_GPIO=y
# CONFIG_VT is not set
# CONFIG_LEGACY_PTYS is not set
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_SAMSUNG=y
CONFIG_SERIAL_SAMSUNG_CONSOLE=y
CONFIG_HW_RANDOM=y
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_GPIO=y
CONFIG_I2C_S3C2410=y
CONFIG_SPI=y
CONFIG_SPI_GPIO=y
CONFIG_POWER_SUPPLY=y
CONFIG_BATTERY_SAMSUNG=y
CONFIG_SENSORS_S3C=y
CONFIG_SENSORS_S3C_RAW=y
CONFIG_WATCHDOG=y
CONFIG_S3C2410_WATCHDOG=y
CONFIG_MFD_MAX8997=y
CONFIG_MFD_MAX77686=y
CONFIG_MFD_S5M_CORE=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_REGULATOR_MAX77686=y
CONFIG_REGULATOR_MAX8649=y
CONFIG_REGULATOR_MAX8997=y
CONFIG_REGULATOR_WM8994=y
CONFIG_MEDIA_SUPPORT=y
CONFIG_MEDIA_CONTROLLER=y
CONFIG_VIDEO_DEV=y
CONFIG_VIDEO_V4L2_SUBDEV_API=y
# CONFIG_RC_CORE is not set
# CONFIG_MEDIA_TUNER_SIMPLE is not set
# CONFIG_MEDIA_TUNER_TDA8290 is not set
# CONFIG_MEDIA_TUNER_TDA827X is not set
# CONFIG_MEDIA_TUNER_TDA18271 is not set
# CONFIG_MEDIA_TUNER_TDA9887 is not set
# CONFIG_MEDIA_TUNER_TEA5761 is not set
# CONFIG_MEDIA_TUNER_TEA5767 is not set
# CONFIG_MEDIA_TUNER_MT20XX is not set
# CONFIG_MEDIA_TUNER_MT2060 is not set
# CONFIG_MEDIA_TUNER_MT2063 is not set
# CONFIG_MEDIA_TUNER_MT2266 is not set
# CONFIG_MEDIA_TUNER_MT2131 is not set
# CONFIG_MEDIA_TUNER_QT1010 is not set
# CONFIG_MEDIA_TUNER_XC2028 is not set
# CONFIG_MEDIA_TUNER_XC5000 is not set
# CONFIG_MEDIA_TUNER_XC4000 is not set
# CONFIG_MEDIA_TUNER_MXL5005S is not set
# CONFIG_MEDIA_TUNER_MXL5007T is not set
# CONFIG_MEDIA_TUNER_MC44S803 is not set
# CONFIG_MEDIA_TUNER_MAX2165 is not set
# CONFIG_MEDIA_TUNER_TDA18218 is not set
# CONFIG_MEDIA_TUNER_TDA18212 is not set
CONFIG_VIDEO_M5MOLS=y
CONFIG_V4L_PLATFORM_DRIVERS=y
CONFIG_VIDEO_SAMSUNG_S5P_FIMC=y
CONFIG_VIDEO_S5P_MIPI_CSIS=y
CONFIG_VIDEO_EXYNOS=y
CONFIG_VIDEO_EXYNOS_MFC=y
CONFIG_V4L_MEM2MEM_DRIVERS=y
CONFIG_ION=y
CONFIG_ION_EXYNOS=y
CONFIG_ION_EXYNOS_CONTIGHEAP_SIZE=12288
CONFIG_MALI400=y
CONFIG_FB=y
CONFIG_FB_S3C=y
CONFIG_BACKLIGHT_LCD_SUPPORT=y
CONFIG_LCD_CLASS_DEVICE=y
CONFIG_LCD_PLATFORM=y
CONFIG_LCD_LMS501KF03=y
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_BACKLIGHT_PWM=y
CONFIG_SOUND=y
CONFIG_SND=y
# CONFIG_SND_DRIVERS is not set
# CONFIG_SND_ARM is not set
CONFIG_SND_SOC=y
CONFIG_SND_SOC_SAMSUNG=y
CONFIG_SND_SOC_SAMSUNG_SMDK_WM8994=y
CONFIG_USB_GADGET=y
CONFIG_USB_S3C_OTGD=y
CONFIG_USB_G_ANDROID=y
CONFIG_MMC=y
CONFIG_MMC_UNSAFE_RESUME=y
CONFIG_MMC_CLKGATE=y
CONFIG_MMC_EMBEDDED_SDIO=y
CONFIG_MMC_PARANOID_SD_INIT=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_S3C=y
CONFIG_MMC_SDHCI_S3C_DMA=y
CONFIG_MMC_DW=y
CONFIG_MMC_DW_IDMAC=y
CONFIG_SWITCH=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_S3C=y
CONFIG_STAGING=y
CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
CONFIG_ASHMEM=y
CONFIG_ANDROID_LOGGER=y
CONFIG_ANDROID_RAM_CONSOLE=y
CONFIG_ANDROID_TIMED_GPIO=y
CONFIG_ANDROID_LOW_MEMORY_KILLER=y
CONFIG_ANDROID_INTF_ALARM_DEV=y
CONFIG_EXYNOS_IOMMU=y
CONFIG_EXT2_FS=y
CONFIG_EXT4_FS=y
# CONFIG_EXT4_FS_XATTR is not set
# CONFIG_DNOTIFY is not set
CONFIG_FUSE_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PRINTK_TIME=y
CONFIG_LOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
CONFIG_SCHEDSTATS=y
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
CONFIG_DEBUG_INFO=y
CONFIG_SCHED_TRACER=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_KGDB=y
CONFIG_KGDB_KDB=y
# CONFIG_ARM_UNWIND is not set
CONFIG_DEBUG_USER=y
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_TWOFISH=y
CONFIG_CRC_CCITT=y


@@ -0,0 +1,319 @@
CONFIG_EXPERIMENTAL=y
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_PANIC_TIMEOUT=5
CONFIG_KALLSYMS_ALL=y
# CONFIG_AIO is not set
CONFIG_EMBEDDED=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_EFI_PARTITION=y
CONFIG_ARCH_EXYNOS=y
CONFIG_S3C_LOWLEVEL_UART_PORT=2
CONFIG_S3C_ADC=y
CONFIG_S3C24XX_PWM=y
# CONFIG_SOC_EXYNOS5250 is not set
CONFIG_EXYNOS_CONTENT_PATH_PROTECTION=y
CONFIG_MACH_SMDK5410=y
CONFIG_EXYNOS_EMMC_HS200=y
CONFIG_ARM_TRUSTZONE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SMP=y
CONFIG_BL_SWITCHER=y
CONFIG_BL_SWITCHER_DUMMY_IF=y
CONFIG_PREEMPT=y
CONFIG_AEABI=y
CONFIG_ARCH_SKIP_SECONDARY_CALIBRATE=y
CONFIG_HIGHMEM=y
CONFIG_CMDLINE="console=ttySAC2,115200n8 vmalloc=512M androidboot.console=ttySAC2"
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_INTERACTIVE=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_ARM_EXYNOS_IKS_CPUFREQ=y
CONFIG_ARM_EXYNOS_IKS_CLUSTER=y
CONFIG_CPU_IDLE=y
CONFIG_VFP=y
CONFIG_NEON=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=y
CONFIG_PM_AUTOSLEEP=y
CONFIG_PM_WAKELOCKS=y
CONFIG_PM_RUNTIME=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_NET_KEY=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_INET_ESP=y
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_LRO is not set
CONFIG_IPV6=y
CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
CONFIG_INET6_IPCOMP=y
CONFIG_IPV6_MIP6=y
CONFIG_IPV6_TUNNEL=y
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_NETFILTER=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_H323=y
CONFIG_NF_CONNTRACK_IRC=y
CONFIG_NF_CONNTRACK_NETBIOS_NS=y
CONFIG_NF_CONNTRACK_PPTP=y
CONFIG_NF_CONNTRACK_SANE=y
CONFIG_NF_CONNTRACK_TFTP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NETFILTER_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
CONFIG_NETFILTER_XT_TARGET_MARK=y
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
CONFIG_NETFILTER_XT_TARGET_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_TRACE=y
CONFIG_NETFILTER_XT_MATCH_COMMENT=y
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=y
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
CONFIG_NETFILTER_XT_MATCH_HELPER=y
CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
CONFIG_NETFILTER_XT_MATCH_LENGTH=y
CONFIG_NETFILTER_XT_MATCH_LIMIT=y
CONFIG_NETFILTER_XT_MATCH_MAC=y
CONFIG_NETFILTER_XT_MATCH_MARK=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
CONFIG_NETFILTER_XT_MATCH_QTAGUID=y
CONFIG_NETFILTER_XT_MATCH_QUOTA=y
CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
CONFIG_NETFILTER_XT_MATCH_QUOTA2_LOG=y
CONFIG_NETFILTER_XT_MATCH_SOCKET=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
CONFIG_NETFILTER_XT_MATCH_STRING=y
CONFIG_NETFILTER_XT_MATCH_TIME=y
CONFIG_NETFILTER_XT_MATCH_U32=y
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_MATCH_AH=y
CONFIG_IP_NF_MATCH_ECN=y
CONFIG_IP_NF_MATCH_TTL=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_REJECT_SKERR=y
CONFIG_NF_NAT=y
CONFIG_IP_NF_TARGET_MASQUERADE=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_RAW=y
CONFIG_IP_NF_ARPTABLES=y
CONFIG_IP_NF_ARPFILTER=y
CONFIG_IP_NF_ARP_MANGLE=y
CONFIG_NF_CONNTRACK_IPV6=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_TARGET_REJECT_SKERR=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_RAW=y
CONFIG_PHONET=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_INGRESS=y
CONFIG_NET_CLS_U32=y
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_U32=y
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=y
CONFIG_NET_ACT_GACT=y
CONFIG_NET_ACT_MIRRED=y
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_SYNC=y
CONFIG_SW_SYNC=y
CONFIG_SW_SYNC_USER=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=60
CONFIG_BLK_DEV_RAM_SIZE=30720
CONFIG_UID_STAT=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_SG=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_UEVENT=y
CONFIG_INPUT_EVDEV=y
CONFIG_INPUT_KEYRESET=y
CONFIG_KEYBOARD_GPIO=y
# CONFIG_INPUT_MOUSE is not set
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_MXT540E=y
CONFIG_INPUT_MISC=y
CONFIG_INPUT_KEYCHORD=y
CONFIG_INPUT_UINPUT=y
CONFIG_INPUT_GPIO=y
# CONFIG_VT is not set
# CONFIG_LEGACY_PTYS is not set
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_SAMSUNG=y
CONFIG_SERIAL_SAMSUNG_CONSOLE=y
CONFIG_HW_RANDOM=y
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_S3C2410=y
CONFIG_I2C_EXYNOS5=y
CONFIG_SPI=y
CONFIG_SPI_S3C64XX=y
CONFIG_POWER_SUPPLY=y
CONFIG_BATTERY_SAMSUNG=y
# CONFIG_HWMON is not set
CONFIG_THERMAL=y
CONFIG_CPU_THERMAL=y
CONFIG_EXYNOS_THERMAL=y
CONFIG_WATCHDOG=y
CONFIG_S3C2410_WATCHDOG=y
CONFIG_MFD_SEC_CORE=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_REGULATOR_S2MPS11=y
CONFIG_REGULATOR_WM8994=y
CONFIG_MEDIA_SUPPORT=y
CONFIG_VIDEO_DEV=y
CONFIG_VIDEO_S5K6B2=y
CONFIG_VISION_MODE=y
CONFIG_VIDEO_EXYNOS=y
CONFIG_VIDEO_EXYNOS_FIMC_LITE=y
CONFIG_VIDEO_EXYNOS_MIPI_CSIS=y
CONFIG_VIDEO_EXYNOS_GSCALER=y
CONFIG_VIDEO_EXYNOS_SCALER=y
CONFIG_VIDEO_EXYNOS_JPEG=y
CONFIG_VIDEO_EXYNOS_JPEG_HX=y
CONFIG_VIDEO_EXYNOS_FIMG2D=y
CONFIG_VIDEO_EXYNOS_MFC=y
CONFIG_VIDEO_EXYNOS_TV=y
CONFIG_VIDEO_EXYNOS_HDMI_CEC=y
CONFIG_VIDEO_EXYNOS5_FIMC_IS=y
CONFIG_VIDEO_EXYNOS5_FIMC_IS_SENSOR=y
CONFIG_V4L_MEM2MEM_DRIVERS=y
CONFIG_ION=y
CONFIG_ION_EXYNOS=y
CONFIG_ION_EXYNOS_CONTIGHEAP_SIZE=32768
CONFIG_ION_EXYNOS_DRM_MEMSIZE_FIMD_VIDEO=49152
CONFIG_FB=y
CONFIG_FB_S3C=y
CONFIG_FB_MIPI_DSIM=y
CONFIG_BACKLIGHT_LCD_SUPPORT=y
CONFIG_LCD_CLASS_DEVICE=y
CONFIG_LCD_MIPI_S6E8AA0=y
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_SND_SOC=y
CONFIG_SND_SOC_SAMSUNG=y
CONFIG_SND_SOC_SAMSUNG_SMDK_WM8994=y
CONFIG_USB=y
CONFIG_USB_SUSPEND=y
CONFIG_USB_EXYNOS_DRD=y
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_XHCI_EXYNOS=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_S5P=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_EXYNOS=y
CONFIG_USB_STORAGE=y
CONFIG_USB_EXYNOS_SWITCH=y
CONFIG_USB_GADGET=y
CONFIG_USB_EXYNOS_SS_UDC=y
CONFIG_USB_EXYNOS_SS_UDC_SSMODE=y
CONFIG_USB_G_ANDROID=y
CONFIG_MMC=y
CONFIG_MMC_UNSAFE_RESUME=y
CONFIG_MMC_CLKGATE=y
CONFIG_MMC_EMBEDDED_SDIO=y
CONFIG_MMC_PARANOID_SD_INIT=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_S3C=y
CONFIG_MMC_SDHCI_S3C_DMA=y
CONFIG_MMC_DW=y
CONFIG_MMC_DW_IDMAC=y
CONFIG_SWITCH=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_SEC=y
CONFIG_STAGING=y
CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
CONFIG_ASHMEM=y
CONFIG_ANDROID_LOGGER=y
CONFIG_ANDROID_RAM_CONSOLE=y
CONFIG_ANDROID_TIMED_GPIO=y
CONFIG_ANDROID_LOW_MEMORY_KILLER=y
CONFIG_ANDROID_INTF_ALARM_DEV=y
CONFIG_EXYNOS_IOMMU=y
CONFIG_PM_DEVFREQ=y
CONFIG_ARM_EXYNOS5410_BUS_DEVFREQ=y
CONFIG_EXT2_FS=y
CONFIG_EXT4_FS=y
# CONFIG_EXT4_FS_XATTR is not set
# CONFIG_DNOTIFY is not set
CONFIG_FUSE_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_CRAMFS=y
CONFIG_ROMFS_FS=y
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PRINTK_TIME=y
CONFIG_LOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
CONFIG_SCHEDSTATS=y
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
CONFIG_DEBUG_INFO=y
CONFIG_SCHED_TRACER=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_KGDB=y
CONFIG_KGDB_KDB=y
# CONFIG_ARM_UNWIND is not set
CONFIG_DEBUG_USER=y
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_CRC_CCITT=y


@@ -0,0 +1,245 @@
CONFIG_EXPERIMENTAL=y
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_PANIC_TIMEOUT=5
CONFIG_KALLSYMS_ALL=y
# CONFIG_AIO is not set
CONFIG_EMBEDDED=y
# CONFIG_SLUB_DEBUG is not set
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_EFI_PARTITION=y
CONFIG_ARCH_EXYNOS=y
CONFIG_S3C_LOWLEVEL_UART_PORT=2
CONFIG_S3C_ADC=y
CONFIG_S3C24XX_PWM=y
# CONFIG_SOC_EXYNOS5250 is not set
CONFIG_MACH_SMDK5410=y
CONFIG_ARM_TRUSTZONE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SMP=y
CONFIG_PREEMPT=y
CONFIG_AEABI=y
CONFIG_HIGHMEM=y
CONFIG_CMDLINE="root=/dev/ram0 rw rootfstype=cramfs ramdisk=30720 initrd=0x41000000,30M console=ttySAC2,115200 init=/linuxrc"
CONFIG_VFP=y
CONFIG_NEON=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_NET_KEY=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_INET_ESP=y
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_LRO is not set
CONFIG_IPV6=y
CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
CONFIG_INET6_IPCOMP=y
CONFIG_IPV6_MIP6=y
CONFIG_IPV6_TUNNEL=y
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_NETFILTER=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_H323=y
CONFIG_NF_CONNTRACK_IRC=y
CONFIG_NF_CONNTRACK_NETBIOS_NS=y
CONFIG_NF_CONNTRACK_PPTP=y
CONFIG_NF_CONNTRACK_SANE=y
CONFIG_NF_CONNTRACK_TFTP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NETFILTER_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
CONFIG_NETFILTER_XT_TARGET_MARK=y
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
CONFIG_NETFILTER_XT_TARGET_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_TRACE=y
CONFIG_NETFILTER_XT_MATCH_COMMENT=y
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=y
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
CONFIG_NETFILTER_XT_MATCH_HELPER=y
CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
CONFIG_NETFILTER_XT_MATCH_LENGTH=y
CONFIG_NETFILTER_XT_MATCH_LIMIT=y
CONFIG_NETFILTER_XT_MATCH_MAC=y
CONFIG_NETFILTER_XT_MATCH_MARK=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
CONFIG_NETFILTER_XT_MATCH_QTAGUID=y
CONFIG_NETFILTER_XT_MATCH_QUOTA=y
CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
CONFIG_NETFILTER_XT_MATCH_QUOTA2_LOG=y
CONFIG_NETFILTER_XT_MATCH_SOCKET=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
CONFIG_NETFILTER_XT_MATCH_STRING=y
CONFIG_NETFILTER_XT_MATCH_TIME=y
CONFIG_NETFILTER_XT_MATCH_U32=y
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_MATCH_AH=y
CONFIG_IP_NF_MATCH_ECN=y
CONFIG_IP_NF_MATCH_TTL=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_REJECT_SKERR=y
CONFIG_NF_NAT=y
CONFIG_IP_NF_TARGET_MASQUERADE=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_RAW=y
CONFIG_IP_NF_ARPTABLES=y
CONFIG_IP_NF_ARPFILTER=y
CONFIG_IP_NF_ARP_MANGLE=y
CONFIG_NF_CONNTRACK_IPV6=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_TARGET_REJECT_SKERR=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_RAW=y
CONFIG_PHONET=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_INGRESS=y
CONFIG_NET_CLS_U32=y
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_U32=y
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=y
CONFIG_NET_ACT_GACT=y
CONFIG_NET_ACT_MIRRED=y
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=60
CONFIG_BLK_DEV_RAM_SIZE=30720
CONFIG_UID_STAT=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_SG=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_UEVENT=y
CONFIG_INPUT_EVDEV=y
CONFIG_INPUT_KEYRESET=y
# CONFIG_INPUT_MOUSE is not set
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_MXT540E=y
CONFIG_INPUT_MISC=y
CONFIG_INPUT_KEYCHORD=y
CONFIG_INPUT_UINPUT=y
CONFIG_INPUT_GPIO=y
# CONFIG_VT is not set
# CONFIG_LEGACY_PTYS is not set
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_SAMSUNG=y
CONFIG_SERIAL_SAMSUNG_CONSOLE=y
CONFIG_HW_RANDOM=y
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_S3C2410=y
CONFIG_I2C_EXYNOS5=y
CONFIG_POWER_SUPPLY=y
# CONFIG_HWMON is not set
CONFIG_WATCHDOG=y
CONFIG_S3C2410_WATCHDOG=y
CONFIG_MFD_SEC_CORE=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_REGULATOR_S2MPS11=y
CONFIG_REGULATOR_WM8994=y
CONFIG_ION=y
CONFIG_ION_EXYNOS=y
CONFIG_ION_EXYNOS_CONTIGHEAP_SIZE=32768
CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_SND_SOC=y
CONFIG_SND_SOC_SAMSUNG=y
CONFIG_SND_SOC_SAMSUNG_SMDK_WM8994=y
# CONFIG_HID_SUPPORT is not set
CONFIG_USB_GADGET=y
CONFIG_USB_FUSB300=y
CONFIG_USB_G_ANDROID=y
CONFIG_MMC=y
CONFIG_MMC_UNSAFE_RESUME=y
CONFIG_MMC_CLKGATE=y
CONFIG_MMC_EMBEDDED_SDIO=y
CONFIG_MMC_PARANOID_SD_INIT=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_S3C=y
CONFIG_MMC_SDHCI_S3C_DMA=y
CONFIG_MMC_DW=y
CONFIG_MMC_DW_IDMAC=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_S3C=y
CONFIG_STAGING=y
CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
CONFIG_ASHMEM=y
CONFIG_ANDROID_LOGGER=y
CONFIG_ANDROID_RAM_CONSOLE=y
CONFIG_ANDROID_TIMED_GPIO=y
CONFIG_ANDROID_LOW_MEMORY_KILLER=y
CONFIG_ANDROID_INTF_ALARM_DEV=y
CONFIG_EXYNOS_IOMMU=y
CONFIG_EXT2_FS=y
CONFIG_EXT4_FS=y
# CONFIG_EXT4_FS_XATTR is not set
# CONFIG_DNOTIFY is not set
CONFIG_FUSE_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_CRAMFS=y
CONFIG_ROMFS_FS=y
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_FS=y
CONFIG_DETECT_HUNG_TASK=y
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_DEBUG_INFO=y
CONFIG_KGDB=y
CONFIG_KGDB_KDB=y
# CONFIG_ARM_UNWIND is not set
CONFIG_DEBUG_USER=y
CONFIG_CRC_CCITT=y


@@ -0,0 +1,320 @@
CONFIG_EXPERIMENTAL=y
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_PANIC_TIMEOUT=5
CONFIG_KALLSYMS_ALL=y
# CONFIG_AIO is not set
CONFIG_EMBEDDED=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_EFI_PARTITION=y
CONFIG_ARCH_EXYNOS=y
CONFIG_S3C_LOWLEVEL_UART_PORT=2
CONFIG_S3C_ADC=y
CONFIG_S3C24XX_PWM=y
# CONFIG_SOC_EXYNOS5250 is not set
CONFIG_EXYNOS_CONTENT_PATH_PROTECTION=y
CONFIG_MACH_SMDK5410=y
CONFIG_EXYNOS_EMMC_HS200=y
CONFIG_ARM_TRUSTZONE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SMP=y
CONFIG_BL_SWITCHER=y
CONFIG_BL_SWITCHER_DUMMY_IF=y
CONFIG_PREEMPT=y
CONFIG_AEABI=y
CONFIG_ARCH_SKIP_SECONDARY_CALIBRATE=y
CONFIG_HIGHMEM=y
CONFIG_CMDLINE="console=ttySAC2,115200n8 vmalloc=512M androidboot.console=ttySAC2"
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_INTERACTIVE=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_ARM_EXYNOS_IKS_CPUFREQ=y
CONFIG_ARM_EXYNOS_IKS_CLUSTER=y
CONFIG_CPU_IDLE=y
CONFIG_VFP=y
CONFIG_NEON=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=y
CONFIG_PM_AUTOSLEEP=y
CONFIG_PM_WAKELOCKS=y
CONFIG_PM_RUNTIME=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_NET_KEY=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_INET_ESP=y
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
# CONFIG_INET_LRO is not set
CONFIG_IPV6=y
CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
CONFIG_INET6_IPCOMP=y
CONFIG_IPV6_MIP6=y
CONFIG_IPV6_TUNNEL=y
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_NETFILTER=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_H323=y
CONFIG_NF_CONNTRACK_IRC=y
CONFIG_NF_CONNTRACK_NETBIOS_NS=y
CONFIG_NF_CONNTRACK_PPTP=y
CONFIG_NF_CONNTRACK_SANE=y
CONFIG_NF_CONNTRACK_TFTP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NETFILTER_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
CONFIG_NETFILTER_XT_TARGET_MARK=y
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
CONFIG_NETFILTER_XT_TARGET_TPROXY=y
CONFIG_NETFILTER_XT_TARGET_TRACE=y
CONFIG_NETFILTER_XT_MATCH_COMMENT=y
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=y
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
CONFIG_NETFILTER_XT_MATCH_HELPER=y
CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
CONFIG_NETFILTER_XT_MATCH_LENGTH=y
CONFIG_NETFILTER_XT_MATCH_LIMIT=y
CONFIG_NETFILTER_XT_MATCH_MAC=y
CONFIG_NETFILTER_XT_MATCH_MARK=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
CONFIG_NETFILTER_XT_MATCH_QTAGUID=y
CONFIG_NETFILTER_XT_MATCH_QUOTA=y
CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
CONFIG_NETFILTER_XT_MATCH_QUOTA2_LOG=y
CONFIG_NETFILTER_XT_MATCH_SOCKET=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
CONFIG_NETFILTER_XT_MATCH_STRING=y
CONFIG_NETFILTER_XT_MATCH_TIME=y
CONFIG_NETFILTER_XT_MATCH_U32=y
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_MATCH_AH=y
CONFIG_IP_NF_MATCH_ECN=y
CONFIG_IP_NF_MATCH_TTL=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_REJECT_SKERR=y
CONFIG_NF_NAT=y
CONFIG_IP_NF_TARGET_MASQUERADE=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_RAW=y
CONFIG_IP_NF_ARPTABLES=y
CONFIG_IP_NF_ARPFILTER=y
CONFIG_IP_NF_ARP_MANGLE=y
CONFIG_NF_CONNTRACK_IPV6=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_TARGET_REJECT_SKERR=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_RAW=y
CONFIG_PHONET=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_INGRESS=y
CONFIG_NET_CLS_U32=y
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_U32=y
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=y
CONFIG_NET_ACT_GACT=y
CONFIG_NET_ACT_MIRRED=y
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_SYNC=y
CONFIG_SW_SYNC=y
CONFIG_SW_SYNC_USER=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=60
CONFIG_BLK_DEV_RAM_SIZE=30720
CONFIG_UID_STAT=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_SG=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_UEVENT=y
CONFIG_INPUT_EVDEV=y
CONFIG_INPUT_KEYRESET=y
CONFIG_KEYBOARD_GPIO=y
# CONFIG_INPUT_MOUSE is not set
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_COASIA=y
CONFIG_INPUT_MISC=y
CONFIG_INPUT_KEYCHORD=y
CONFIG_INPUT_UINPUT=y
CONFIG_INPUT_GPIO=y
# CONFIG_VT is not set
# CONFIG_LEGACY_PTYS is not set
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_SAMSUNG=y
CONFIG_SERIAL_SAMSUNG_CONSOLE=y
CONFIG_HW_RANDOM=y
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_S3C2410=y
CONFIG_I2C_EXYNOS5=y
CONFIG_SPI=y
CONFIG_SPI_S3C64XX=y
CONFIG_POWER_SUPPLY=y
CONFIG_BATTERY_SAMSUNG=y
# CONFIG_HWMON is not set
CONFIG_THERMAL=y
CONFIG_CPU_THERMAL=y
CONFIG_EXYNOS_THERMAL=y
CONFIG_WATCHDOG=y
CONFIG_S3C2410_WATCHDOG=y
CONFIG_MFD_SEC_CORE=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_REGULATOR_S2MPS11=y
CONFIG_REGULATOR_WM8994=y
CONFIG_MEDIA_SUPPORT=y
CONFIG_VIDEO_DEV=y
CONFIG_VIDEO_S5K6B2=y
CONFIG_VISION_MODE=y
CONFIG_VIDEO_EXYNOS=y
CONFIG_VIDEO_EXYNOS_FIMC_LITE=y
CONFIG_VIDEO_EXYNOS_MIPI_CSIS=y
CONFIG_VIDEO_EXYNOS_GSCALER=y
CONFIG_VIDEO_EXYNOS_SCALER=y
CONFIG_VIDEO_EXYNOS_JPEG=y
CONFIG_VIDEO_EXYNOS_JPEG_HX=y
CONFIG_VIDEO_EXYNOS_FIMG2D=y
CONFIG_VIDEO_EXYNOS_MFC=y
CONFIG_VIDEO_EXYNOS_TV=y
CONFIG_VIDEO_EXYNOS_HDMI_CEC=y
CONFIG_VIDEO_EXYNOS5_FIMC_IS=y
CONFIG_VIDEO_EXYNOS5_FIMC_IS_SENSOR=y
CONFIG_V4L_MEM2MEM_DRIVERS=y
CONFIG_ION=y
CONFIG_ION_EXYNOS=y
CONFIG_ION_EXYNOS_CONTIGHEAP_SIZE=128800
CONFIG_ION_EXYNOS_DRM_MEMSIZE_FIMD_VIDEO=49152
CONFIG_FB=y
CONFIG_FB_S3C=y
CONFIG_FB_EXYNOS_FIMD_SYSMMU_DISABLE=y
CONFIG_S5P_DP=y
CONFIG_BACKLIGHT_LCD_SUPPORT=y
CONFIG_LCD_CLASS_DEVICE=y
CONFIG_LCD_PLATFORM=y
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_SND_SOC=y
CONFIG_SND_SOC_SAMSUNG=y
CONFIG_SND_SOC_SAMSUNG_SMDK_WM8994=y
CONFIG_USB=y
CONFIG_USB_SUSPEND=y
CONFIG_USB_EXYNOS_DRD=y
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_XHCI_EXYNOS=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_S5P=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_EXYNOS=y
CONFIG_USB_STORAGE=y
CONFIG_USB_EXYNOS_SWITCH=y
CONFIG_USB_GADGET=y
CONFIG_USB_EXYNOS_SS_UDC=y
CONFIG_USB_EXYNOS_SS_UDC_SSMODE=y
CONFIG_USB_G_ANDROID=y
CONFIG_MMC=y
CONFIG_MMC_UNSAFE_RESUME=y
CONFIG_MMC_CLKGATE=y
CONFIG_MMC_EMBEDDED_SDIO=y
CONFIG_MMC_PARANOID_SD_INIT=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_S3C=y
CONFIG_MMC_SDHCI_S3C_DMA=y
CONFIG_MMC_DW=y
CONFIG_MMC_DW_IDMAC=y
CONFIG_SWITCH=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_SEC=y
CONFIG_STAGING=y
CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
CONFIG_ASHMEM=y
CONFIG_ANDROID_LOGGER=y
CONFIG_ANDROID_RAM_CONSOLE=y
CONFIG_ANDROID_TIMED_GPIO=y
CONFIG_ANDROID_LOW_MEMORY_KILLER=y
CONFIG_ANDROID_INTF_ALARM_DEV=y
CONFIG_EXYNOS_IOMMU=y
CONFIG_PM_DEVFREQ=y
CONFIG_ARM_EXYNOS5410_BUS_DEVFREQ=y
CONFIG_EXT2_FS=y
CONFIG_EXT4_FS=y
# CONFIG_EXT4_FS_XATTR is not set
# CONFIG_DNOTIFY is not set
CONFIG_FUSE_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_CRAMFS=y
CONFIG_ROMFS_FS=y
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PRINTK_TIME=y
CONFIG_LOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=10
CONFIG_SCHEDSTATS=y
# CONFIG_DEBUG_PREEMPT is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
CONFIG_DEBUG_INFO=y
CONFIG_SCHED_TRACER=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_KGDB=y
CONFIG_KGDB_KDB=y
# CONFIG_ARM_UNWIND is not set
CONFIG_DEBUG_USER=y
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_CRC_CCITT=y


@@ -0,0 +1,109 @@
/*
* arch/arm/include/asm/bL_entry.h
*
* Created by: Nicolas Pitre, April 2012
* Copyright: (C) 2012 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef BL_ENTRY_H
#define BL_ENTRY_H
#define BL_CPUS_PER_CLUSTER 4
#define BL_NR_CLUSTERS 2
/* Definitions for bL_cluster_sync_struct */
#define CPU_DOWN 0
#define CPU_COMING_UP 1
#define CPU_UP 2
#define CPU_GOING_DOWN 3
#define CLUSTER_DOWN 0
#define CLUSTER_UP 1
#define CLUSTER_GOING_DOWN 2
#define INBOUND_NOT_COMING_UP 0
#define INBOUND_COMING_UP 1
#define BL_VLOCK_STRUCT_SIZE 8
#ifndef __ASSEMBLY__
#include <linux/mm.h>
#include <linux/types.h>
/* Synchronisation structures for coordinating safe cluster setup/teardown: */
struct bL_cluster_sync_struct {
s8 cpus[BL_CPUS_PER_CLUSTER]; /* individual CPU states */
s8 cluster; /* cluster state */
s8 inbound; /* inbound-side state */
s8 first_man; /* CPU index of elected first man */
};
struct bL_sync_struct {
struct bL_cluster_sync_struct clusters[BL_NR_CLUSTERS];
};
/* How much physical memory to reserve for the synchronisation structure: */
#define BL_SYNC_MEM_RESERVE PAGE_ALIGN(sizeof(struct bL_cluster_sync_struct))
extern unsigned long bL_sync_phys; /* physical address of *bL_sync */
struct bL_vlock_struct {
unsigned int voting_owner;
unsigned char voting_offset[BL_CPUS_PER_CLUSTER];
};
struct bL_firstman_vlock_struct {
struct bL_vlock_struct clusters[BL_NR_CLUSTERS];
};
extern unsigned long bL_vlock_phys;
#define BL_VLOCK_MEM_RESERVE PAGE_ALIGN(sizeof(struct bL_vlock_struct))
void __bL_cpu_going_down(unsigned int cpu, unsigned int cluster);
void __bL_cpu_down(unsigned int cpu, unsigned int cluster);
void __bL_outbound_leave_critical(unsigned int cluster, int state);
bool __bL_outbound_enter_critical(unsigned int this_cpu, unsigned int cluster);
bool __bL_cluster_state(unsigned int cluster);
int __init bL_cluster_sync_reserve(void);
unsigned int bL_running_cluster_num_cpus(unsigned int cpu);
void bL_update_cluster_state(unsigned int value, unsigned int cluster);
void bL_update_cpu_state(unsigned int value, unsigned int cpu,
unsigned int cluster);
/*
* CPU/cluster power operations for higher subsystems to use.
* This is the "public" API whereas the above is meant to be used
* only in the implementation of this API.
*/
struct bL_power_ops {
void (*power_up)(unsigned int cpu, unsigned int cluster);
void (*power_down)(unsigned int cpu, unsigned int cluster);
void (*power_up_setup)(void);
void (*inbound_setup)(unsigned int cpu, unsigned int cluster);
};
int __init bL_cluster_sync_init(const struct bL_power_ops *ops);
/*
* Platform specific code should use this symbol to set up the secondary
* entry location for processors to use when released from reset.
*/
extern void bl_entry_point(void);
/*
* This is used to indicate, via ptr, where the given CPU from the given
* cluster should branch once it is ready to re-enter the kernel, or NULL
* if it should be gated. A gated CPU is held in a WFE loop until its
* vector becomes non-NULL.
*/
void bL_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr);
#endif /* ! __ASSEMBLY__ */
#endif


@@ -0,0 +1,29 @@
/*
* arch/arm/include/asm/bL_switcher.h
*
* Created by: Nicolas Pitre, April 2012
* Copyright: (C) 2012 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef ASM_BL_SWITCHER_H
#define ASM_BL_SWITCHER_H
enum switch_event {
SWITCH_ENTER,
SWITCH_EXIT,
};
struct bL_power_ops;
int __init bL_switcher_init(const struct bL_power_ops *ops);
void bL_switch_request(unsigned int cpu, unsigned int new_cluster_id);
int bL_cluster_switch_request(unsigned int new_cluster);
int register_bL_swicher_notifier(struct notifier_block *nb);
int unregister_bL_swicher_notifier(struct notifier_block *nb);
bool bL_check_auto_switcher_enable(void);
#endif


@@ -0,0 +1,43 @@
/*
* vlock.h - simple voting lock implementation
*
* Created by: Dave Martin, 2012-08-16
* Copyright: (C) 2012 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License along
* with this program; if not, write to the Free Software Foundation, Inc.,
* 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
#ifndef __VLOCK_H
#define __VLOCK_H
#include <asm/bL_entry.h>
#define VLOCK_OWNER_OFFSET 0
#define VLOCK_VOTING_OFFSET 4
#define VLOCK_VOTING_SIZE ((BL_CPUS_PER_CLUSTER + 3) / 4 * 4)
#define VLOCK_SIZE (VLOCK_VOTING_OFFSET + VLOCK_VOTING_SIZE)
#define VLOCK_OWNER_NONE 0
#ifndef __ASSEMBLY__
struct vlock {
char data[VLOCK_SIZE];
};
int vlock_trylock(struct vlock *lock, unsigned int owner);
void vlock_unlock(struct vlock *lock);
#endif /* __ASSEMBLY__ */
#endif /* ! __VLOCK_H */


@@ -16,6 +16,7 @@
#include <asm/shmparam.h>
#include <asm/cachetype.h>
#include <asm/outercache.h>
#include <asm/rodata.h>
#define CACHE_COLOUR(vaddr) ((vaddr & (SHMLBA - 1)) >> PAGE_SHIFT)
@@ -49,6 +50,10 @@
*
* Unconditionally clean and invalidate the entire cache.
*
* flush_kern_dcache_level(level)
*
* Flush data cache levels up to the level input parameter.
*
* flush_user_all()
*
* Clean and invalidate all user space cache entries
@@ -97,6 +102,7 @@
struct cpu_cache_fns {
void (*flush_icache_all)(void);
void (*flush_kern_all)(void);
void (*flush_kern_dcache_level)(int);
void (*flush_user_all)(void);
void (*flush_user_range)(unsigned long, unsigned long, unsigned int);
@@ -199,6 +205,39 @@ extern void copy_to_user_page(struct vm_area_struct *, struct page *,
#define __flush_icache_preferred __flush_icache_all_generic
#endif
#if __LINUX_ARM_ARCH__ >= 7
/*
* Hotplug and CPU idle code need to flush only the cache levels
* impacted by power-down operations. On v7 the upper level is
* retrieved by reading the LoUIS field of CLIDR, since inner shareability
* represents the cache boundaries affected by per-CPU shutdown
* operations on the most common platforms.
*/
#define __cache_level_v7_uis ({ \
u32 val; \
asm volatile("mrc p15, 1, %0, c0, c0, 1" : "=r"(val)); \
((val & 0xe00000) >> 21); })
#define flush_cache_level_preferred() __cache_level_v7_uis
#else
#define flush_cache_level_preferred() (-1)
#endif
static inline int flush_cache_level_cpu(void)
{
return flush_cache_level_preferred();
}
/*
* Flush data cache up to a certain cache level
* level - upper cache level to clean
* if level == -1, default to flush_kern_all
*/
#ifdef MULTI_CACHE
#define flush_dcache_level(level) cpu_cache.flush_kern_dcache_level(level)
#else
#define flush_dcache_level(level) __cpuc_flush_kern_all()
#endif
static inline void __flush_icache_all(void)
{
__flush_icache_preferred();
@@ -206,6 +245,12 @@ static inline void __flush_icache_all(void)
#define flush_cache_all() __cpuc_flush_kern_all()
#ifndef CONFIG_SMP
#define flush_all_cpu_caches() flush_cache_all()
#else
extern void flush_all_cpu_caches(void);
#endif
static inline void vivt_flush_cache_mm(struct mm_struct *mm)
{
if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)))


@@ -0,0 +1,64 @@
/*
* arch/arm/include/asm/fiq_debugger.h
*
* Copyright (C) 2010 Google, Inc.
* Author: Colin Cross <ccross@android.com>
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _ARCH_ARM_MACH_TEGRA_FIQ_DEBUGGER_H_
#define _ARCH_ARM_MACH_TEGRA_FIQ_DEBUGGER_H_
#include <linux/serial_core.h>
#define FIQ_DEBUGGER_NO_CHAR NO_POLL_CHAR
#define FIQ_DEBUGGER_BREAK 0x00ff0100
#define FIQ_DEBUGGER_FIQ_IRQ_NAME "fiq"
#define FIQ_DEBUGGER_SIGNAL_IRQ_NAME "signal"
#define FIQ_DEBUGGER_WAKEUP_IRQ_NAME "wakeup"
/**
* struct fiq_debugger_pdata - fiq debugger platform data
* @uart_resume: used to restore uart state right before enabling
* the fiq.
* @uart_enable: Do the work necessary to communicate with the uart
* hw (enable clocks, etc.). This must be ref-counted.
* @uart_disable: Do the work necessary to disable the uart hw
* (disable clocks, etc.). This must be ref-counted.
* @uart_dev_suspend: called during PM suspend, generally not needed
* for real fiq mode debugger.
* @uart_dev_resume: called during PM resume, generally not needed
* for real fiq mode debugger.
*/
struct fiq_debugger_pdata {
int (*uart_init)(struct platform_device *pdev);
void (*uart_free)(struct platform_device *pdev);
int (*uart_resume)(struct platform_device *pdev);
int (*uart_getc)(struct platform_device *pdev);
void (*uart_putc)(struct platform_device *pdev, unsigned int c);
void (*uart_flush)(struct platform_device *pdev);
void (*uart_enable)(struct platform_device *pdev);
void (*uart_disable)(struct platform_device *pdev);
int (*uart_dev_suspend)(struct platform_device *pdev);
int (*uart_dev_resume)(struct platform_device *pdev);
void (*fiq_enable)(struct platform_device *pdev, unsigned int fiq,
bool enable);
void (*fiq_ack)(struct platform_device *pdev, unsigned int fiq);
void (*force_irq)(struct platform_device *pdev, unsigned int irq);
void (*force_irq_ack)(struct platform_device *pdev, unsigned int irq);
};
#endif


@@ -0,0 +1,30 @@
/*
* Copyright (C) 2010 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __ASM_FIQ_GLUE_H
#define __ASM_FIQ_GLUE_H
struct fiq_glue_handler {
void (*fiq)(struct fiq_glue_handler *h, void *regs, void *svc_sp);
void (*resume)(struct fiq_glue_handler *h);
};
int fiq_glue_register_handler(struct fiq_glue_handler *handler);
#ifdef CONFIG_FIQ_GLUE
void fiq_glue_resume(void);
#else
static inline void fiq_glue_resume(void) {}
#endif
#endif


@@ -5,7 +5,7 @@
#include <linux/threads.h>
#include <asm/irq.h>
#define NR_IPI 5
#define NR_IPI 6
typedef struct {
unsigned int __softirq_pending;


@@ -66,6 +66,7 @@
#define L2X0_STNDBY_MODE_EN (1 << 0)
/* Register shifts and masks */
#define L2X0_CACHE_ID_REV_MASK (0x3f)
#define L2X0_CACHE_ID_PART_MASK (0xf << 6)
#define L2X0_CACHE_ID_PART_L210 (1 << 6)
#define L2X0_CACHE_ID_PART_L310 (3 << 6)
@@ -102,6 +103,8 @@
#define L2X0_ADDR_FILTER_EN 1
#define REV_PL310_R2P0 4
#ifndef __ASSEMBLY__
extern void __init l2x0_init(void __iomem *base, u32 aux_val, u32 aux_mask);
#if defined(CONFIG_CACHE_L2X0) && defined(CONFIG_OF)


@@ -17,15 +17,23 @@
#define TRACER_ACCESSED_BIT 0
#define TRACER_RUNNING_BIT 1
#define TRACER_CYCLE_ACC_BIT 2
#define TRACER_TRACE_DATA_BIT 3
#define TRACER_TIMESTAMP_BIT 4
#define TRACER_BRANCHOUTPUT_BIT 5
#define TRACER_RETURN_STACK_BIT 6
#define TRACER_ACCESSED BIT(TRACER_ACCESSED_BIT)
#define TRACER_RUNNING BIT(TRACER_RUNNING_BIT)
#define TRACER_CYCLE_ACC BIT(TRACER_CYCLE_ACC_BIT)
#define TRACER_TRACE_DATA BIT(TRACER_TRACE_DATA_BIT)
#define TRACER_TIMESTAMP BIT(TRACER_TIMESTAMP_BIT)
#define TRACER_BRANCHOUTPUT BIT(TRACER_BRANCHOUTPUT_BIT)
#define TRACER_RETURN_STACK BIT(TRACER_RETURN_STACK_BIT)
#define TRACER_TIMEOUT 10000
#define etm_writel(t, v, x) \
(__raw_writel((v), (t)->etm_regs + (x)))
#define etm_readl(t, x) (__raw_readl((t)->etm_regs + (x)))
#define etm_writel(t, id, v, x) \
(__raw_writel((v), (t)->etm_regs[(id)] + (x)))
#define etm_readl(t, id, x) (__raw_readl((t)->etm_regs[(id)] + (x)))
/* CoreSight Management Registers */
#define CSMR_LOCKACCESS 0xfb0
@@ -43,7 +51,7 @@
#define ETMCTRL_POWERDOWN 1
#define ETMCTRL_PROGRAM (1 << 10)
#define ETMCTRL_PORTSEL (1 << 11)
#define ETMCTRL_DO_CONTEXTID (3 << 14)
#define ETMCTRL_CONTEXTIDSIZE(x) (((x) & 3) << 14)
#define ETMCTRL_PORTMASK1 (7 << 4)
#define ETMCTRL_PORTMASK2 (1 << 21)
#define ETMCTRL_PORTMASK (ETMCTRL_PORTMASK1 | ETMCTRL_PORTMASK2)
@@ -55,9 +63,12 @@
#define ETMCTRL_DATA_DO_BOTH (ETMCTRL_DATA_DO_DATA | ETMCTRL_DATA_DO_ADDR)
#define ETMCTRL_BRANCH_OUTPUT (1 << 8)
#define ETMCTRL_CYCLEACCURATE (1 << 12)
#define ETMCTRL_TIMESTAMP_EN (1 << 28)
#define ETMCTRL_RETURN_STACK_EN (1 << 29)
/* ETM configuration code register */
#define ETMR_CONFCODE (0x04)
#define ETMCCR_ETMIDR_PRESENT BIT(31)
/* ETM trace start/stop resource control register */
#define ETMR_TRACESSCTRL (0x18)
@@ -113,10 +124,25 @@
#define ETMR_TRACEENCTRL 0x24
#define ETMTE_INCLEXCL BIT(24)
#define ETMR_TRACEENEVT 0x20
#define ETMCTRL_OPTS (ETMCTRL_DO_CPRT | \
ETMCTRL_DATA_DO_ADDR | \
ETMCTRL_BRANCH_OUTPUT | \
ETMCTRL_DO_CONTEXTID)
#define ETMR_VIEWDATAEVT 0x30
#define ETMR_VIEWDATACTRL1 0x34
#define ETMR_VIEWDATACTRL2 0x38
#define ETMR_VIEWDATACTRL3 0x3c
#define ETMVDC3_EXCLONLY BIT(16)
#define ETMCTRL_OPTS (ETMCTRL_DO_CPRT)
#define ETMR_ID 0x1e4
#define ETMIDR_VERSION(x) (((x) >> 4) & 0xff)
#define ETMIDR_VERSION_3_1 0x21
#define ETMIDR_VERSION_PFT_1_0 0x30
#define ETMR_CCE 0x1e8
#define ETMCCER_RETURN_STACK_IMPLEMENTED BIT(23)
#define ETMCCER_TIMESTAMPING_IMPLEMENTED BIT(22)
#define ETMR_TRACEIDR 0x200
/* ETM management registers, "ETM Architecture", 3.5.24 */
#define ETMMR_OSLAR 0x300
@@ -140,14 +166,16 @@
#define ETBFF_TRIGIN BIT(8)
#define ETBFF_TRIGEVT BIT(9)
#define ETBFF_TRIGFL BIT(10)
#define ETBFF_STOPFL BIT(12)
#define etb_writel(t, v, x) \
(__raw_writel((v), (t)->etb_regs + (x)))
#define etb_readl(t, x) (__raw_readl((t)->etb_regs + (x)))
#define etm_lock(t) do { etm_writel((t), 0, CSMR_LOCKACCESS); } while (0)
#define etm_unlock(t) \
do { etm_writel((t), UNLOCK_MAGIC, CSMR_LOCKACCESS); } while (0)
#define etm_lock(t, id) \
do { etm_writel((t), (id), 0, CSMR_LOCKACCESS); } while (0)
#define etm_unlock(t, id) \
do { etm_writel((t), (id), UNLOCK_MAGIC, CSMR_LOCKACCESS); } while (0)
#define etb_lock(t) do { etb_writel((t), 0, CSMR_LOCKACCESS); } while (0)
#define etb_unlock(t) \


@@ -31,6 +31,8 @@
#define GIC_DIST_TARGET 0x800
#define GIC_DIST_CONFIG 0xc00
#define GIC_DIST_SOFTINT 0xf00
#define GIC_DIST_SGI_PENDING_CLEAR 0xf10
#define GIC_DIST_SGI_PENDING_SET 0xf20
#ifndef __ASSEMBLY__
#include <linux/irqdomain.h>
@@ -52,6 +54,8 @@ static inline void gic_init(unsigned int nr, int start,
gic_init_bases(nr, start, dist, cpu, 0, NULL);
}
void gic_migrate_target(unsigned int new_cpu_id);
#endif
#endif


@@ -30,6 +30,9 @@ extern void asm_do_IRQ(unsigned int, struct pt_regs *);
void handle_IRQ(unsigned int, struct pt_regs *);
void init_IRQ(void);
void arch_trigger_all_cpu_backtrace(void);
#define arch_trigger_all_cpu_backtrace arch_trigger_all_cpu_backtrace
#endif
#endif


@@ -0,0 +1,28 @@
/*
* arch/arm/include/asm/mach/mmc.h
*/
#ifndef ASMARM_MACH_MMC_H
#define ASMARM_MACH_MMC_H
#include <linux/mmc/host.h>
#include <linux/mmc/card.h>
#include <linux/mmc/sdio_func.h>
struct embedded_sdio_data {
struct sdio_cis cis;
struct sdio_cccr cccr;
struct sdio_embedded_func *funcs;
int num_funcs;
};
struct mmc_platform_data {
unsigned int ocr_mask; /* available voltages */
int built_in; /* built-in device flag */
int card_present; /* card detect state */
u32 (*translate_vdd)(struct device *, unsigned int);
unsigned int (*status)(struct device *);
struct embedded_sdio_data *embedded_sdio;
int (*register_status_notify)(void (*callback)(int card_present, void *dev_id), void *dev_id);
};
#endif


@@ -34,11 +34,4 @@ typedef struct {
#endif
/*
* switch_mm() may do a full cache flush over the context switch,
* so enable interrupts over the context switch to avoid high
* latency.
*/
#define __ARCH_WANT_INTERRUPTS_ON_CTXSW
#endif


@@ -43,45 +43,104 @@ void __check_kvm_seq(struct mm_struct *mm);
#define ASID_FIRST_VERSION (1 << ASID_BITS)
extern unsigned int cpu_last_asid;
#ifdef CONFIG_SMP
DECLARE_PER_CPU(struct mm_struct *, current_mm);
#endif
void __init_new_context(struct task_struct *tsk, struct mm_struct *mm);
void __new_context(struct mm_struct *mm);
void cpu_set_reserved_ttbr0(void);
static inline void check_context(struct mm_struct *mm)
static inline void switch_new_context(struct mm_struct *mm)
{
/*
* This code is executed with interrupts enabled. Therefore,
* mm->context.id cannot be updated to the latest ASID version
* on a different CPU (and condition below not triggered)
* without first getting an IPI to reset the context. The
* alternative is to take a read_lock on mm->context.id_lock
* (after changing its type to rwlock_t).
*/
if (unlikely((mm->context.id ^ cpu_last_asid) >> ASID_BITS))
__new_context(mm);
unsigned long flags;
__new_context(mm);
local_irq_save(flags);
cpu_switch_mm(mm->pgd, mm);
local_irq_restore(flags);
}
static inline void check_and_switch_context(struct mm_struct *mm,
struct task_struct *tsk)
{
if (unlikely(mm->context.kvm_seq != init_mm.context.kvm_seq))
__check_kvm_seq(mm);
/*
* Required during context switch to avoid speculative page table
* walking with the wrong TTBR.
*/
cpu_set_reserved_ttbr0();
if (!((mm->context.id ^ cpu_last_asid) >> ASID_BITS))
/*
* The ASID is from the current generation, just switch to the
* new pgd. This condition is only true for calls from
* context_switch() and interrupts are already disabled.
*/
cpu_switch_mm(mm->pgd, mm);
else if (irqs_disabled())
/*
* Defer the new ASID allocation until after the context
* switch critical region since __new_context() cannot be
* called with interrupts disabled (it sends IPIs).
*/
set_ti_thread_flag(task_thread_info(tsk), TIF_SWITCH_MM);
else
/*
* That is a direct call to switch_mm() or activate_mm() with
* interrupts enabled and a new context.
*/
switch_new_context(mm);
}
#define init_new_context(tsk,mm) (__init_new_context(tsk,mm),0)
#else
static inline void check_context(struct mm_struct *mm)
#define finish_arch_post_lock_switch \
finish_arch_post_lock_switch
static inline void finish_arch_post_lock_switch(void)
{
if (test_and_clear_thread_flag(TIF_SWITCH_MM))
switch_new_context(current->mm);
}
#else /* !CONFIG_CPU_HAS_ASID */
#ifdef CONFIG_MMU
static inline void check_and_switch_context(struct mm_struct *mm,
struct task_struct *tsk)
{
if (unlikely(mm->context.kvm_seq != init_mm.context.kvm_seq))
__check_kvm_seq(mm);
#endif
if (irqs_disabled())
/*
* cpu_switch_mm() needs to flush the VIVT caches. To avoid
* high interrupt latencies, defer the call and continue
* running with the old mm. Since we only support UP systems
* on non-ASID CPUs, the old mm will remain valid until the
* finish_arch_post_lock_switch() call.
*/
set_ti_thread_flag(task_thread_info(tsk), TIF_SWITCH_MM);
else
cpu_switch_mm(mm->pgd, mm);
}
#define finish_arch_post_lock_switch \
finish_arch_post_lock_switch
static inline void finish_arch_post_lock_switch(void)
{
if (test_and_clear_thread_flag(TIF_SWITCH_MM)) {
struct mm_struct *mm = current->mm;
cpu_switch_mm(mm->pgd, mm);
}
}
#endif /* CONFIG_MMU */
#define init_new_context(tsk,mm) 0
#endif
#endif /* CONFIG_CPU_HAS_ASID */
#define destroy_context(mm) do { } while(0)
@@ -119,12 +178,7 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
__flush_icache_all();
#endif
if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next) {
#ifdef CONFIG_SMP
struct mm_struct **crt_mm = &per_cpu(current_mm, cpu);
*crt_mm = next;
#endif
check_context(next);
cpu_switch_mm(next->pgd, next);
check_and_switch_context(next, tsk);
if (cache_is_vivt())
cpumask_clear_cpu(cpu, mm_cpumask(prev));
}


@@ -7,121 +7,10 @@
*/
#ifndef _ASM_MUTEX_H
#define _ASM_MUTEX_H
#if __LINUX_ARM_ARCH__ < 6
/* On pre-ARMv6 hardware the swp based implementation is the most efficient. */
# include <asm-generic/mutex-xchg.h>
#else
/*
* Attempting to lock a mutex on ARMv6+ can be done with a bastardized
* atomic decrement (it is not a reliable atomic decrement but it satisfies
* the defined semantics for our purpose, while being smaller and faster
* than a real atomic decrement or atomic swap. The idea is to attempt
* decrementing the lock value only once. If once decremented it isn't zero,
* or if its store-back fails due to a dispute on the exclusive store, we
* simply bail out immediately through the slow path where the lock will be
* reattempted until it succeeds.
* On pre-ARMv6 hardware this results in a swp-based implementation,
* which is the most efficient. For ARMv6+, we emit a pair of exclusive
* accesses instead.
*/
static inline void
__mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
{
int __ex_flag, __res;
__asm__ (
"ldrex %0, [%2] \n\t"
"sub %0, %0, #1 \n\t"
"strex %1, %0, [%2] "
: "=&r" (__res), "=&r" (__ex_flag)
: "r" (&(count)->counter)
: "cc","memory" );
__res |= __ex_flag;
if (unlikely(__res != 0))
fail_fn(count);
}
static inline int
__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
{
int __ex_flag, __res;
__asm__ (
"ldrex %0, [%2] \n\t"
"sub %0, %0, #1 \n\t"
"strex %1, %0, [%2] "
: "=&r" (__res), "=&r" (__ex_flag)
: "r" (&(count)->counter)
: "cc","memory" );
__res |= __ex_flag;
if (unlikely(__res != 0))
__res = fail_fn(count);
return __res;
}
/*
* Same trick is used for the unlock fast path. However the original value,
* rather than the result, is used to test for success in order to have
* better generated assembly.
*/
static inline void
__mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
{
int __ex_flag, __res, __orig;
__asm__ (
"ldrex %0, [%3] \n\t"
"add %1, %0, #1 \n\t"
"strex %2, %1, [%3] "
: "=&r" (__orig), "=&r" (__res), "=&r" (__ex_flag)
: "r" (&(count)->counter)
: "cc","memory" );
__orig |= __ex_flag;
if (unlikely(__orig != 0))
fail_fn(count);
}
/*
* If the unlock was done on a contended lock, or if the unlock simply fails
* then the mutex remains locked.
*/
#define __mutex_slowpath_needs_to_unlock() 1
/*
* For __mutex_fastpath_trylock we use another construct which could be
* described as a "single value cmpxchg".
*
* This provides the needed trylock semantics like cmpxchg would, but it is
* lighter and less generic than a true cmpxchg implementation.
*/
static inline int
__mutex_fastpath_trylock(atomic_t *count, int (*fail_fn)(atomic_t *))
{
int __ex_flag, __res, __orig;
__asm__ (
"1: ldrex %0, [%3] \n\t"
"subs %1, %0, #1 \n\t"
"strexeq %2, %1, [%3] \n\t"
"movlt %0, #0 \n\t"
"cmpeq %2, #0 \n\t"
"bgt 1b "
: "=&r" (__orig), "=&r" (__res), "=&r" (__ex_flag)
: "r" (&count->counter)
: "cc", "memory" );
return __orig;
}
#endif
#include <asm-generic/mutex-xchg.h>
#endif


@@ -0,0 +1,32 @@
/*
* arch/arm/include/asm/rodata.h
*
* Copyright (C) 2011 Google, Inc.
*
* Author: Colin Cross <ccross@android.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASMARM_RODATA_H
#define _ASMARM_RODATA_H
#ifndef __ASSEMBLY__
#ifdef CONFIG_DEBUG_RODATA
int set_memory_rw(unsigned long virt, int numpages);
int set_memory_ro(unsigned long virt, int numpages);
void mark_rodata_ro(void);
void set_kernel_text_rw(void);
void set_kernel_text_ro(void);
#else
static inline void set_kernel_text_rw(void) { }
static inline void set_kernel_text_ro(void) { }
#endif
#endif
#endif


@@ -10,5 +10,7 @@
extern void sched_clock_postinit(void);
extern void setup_sched_clock(u32 (*read)(void), int bits, unsigned long rate);
extern void setup_sched_clock_needs_suspend(u32 (*read)(void), int bits,
unsigned long rate);
#endif


@@ -90,7 +90,11 @@ extern void platform_cpu_die(unsigned int cpu);
extern int platform_cpu_kill(unsigned int cpu);
extern void platform_cpu_enable(unsigned int cpu);
extern void arm_send_ping_ipi(int cpu);
extern void arch_send_call_function_single_ipi(int cpu);
extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
extern void smp_send_all_cpu_backtrace(void);
#endif /* ifndef __ASM_ARM_SMP_H */


@@ -153,6 +153,7 @@ extern int vfp_restore_user_hwstate(struct user_vfp __user *,
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */
#define TIF_RESTORE_SIGMASK 20
#define TIF_SECCOMP 21
#define TIF_SWITCH_MM 22 /* deferred switch_mm */
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)


@@ -1 +0,0 @@
vmlinux.lds


@@ -21,6 +21,8 @@
#include <asm/memory.h>
#include <asm/procinfo.h>
#include <asm/hardware/cache-l2x0.h>
#include <asm/bL_entry.h>
#include <asm/bL_switcher.h>
#include <linux/kbuild.h>
/*
@@ -144,5 +146,15 @@ int main(void)
DEFINE(DMA_BIDIRECTIONAL, DMA_BIDIRECTIONAL);
DEFINE(DMA_TO_DEVICE, DMA_TO_DEVICE);
DEFINE(DMA_FROM_DEVICE, DMA_FROM_DEVICE);
BLANK();
DEFINE(BL_POWER_UP_SETUP, offsetof(struct bL_power_ops, power_up_setup));
DEFINE(BL_SYNC_CLUSTER_SIZE, sizeof(struct bL_cluster_sync_struct));
DEFINE(BL_SYNC_CLUSTER_FIRST_MAN,
offsetof(struct bL_cluster_sync_struct, first_man));
DEFINE(BL_SYNC_CLUSTER_CPUS, offsetof(struct bL_cluster_sync_struct, cpus));
DEFINE(BL_SYNC_CLUSTER_CLUSTER,
offsetof(struct bL_cluster_sync_struct, cluster));
DEFINE(BL_SYNC_CLUSTER_INBOUND,
offsetof(struct bL_cluster_sync_struct, inbound));
return 0;
}


@@ -15,6 +15,7 @@
#include <linux/init.h>
#include <linux/types.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/sysrq.h>
#include <linux/device.h>
#include <linux/clk.h>
@@ -37,26 +38,37 @@ MODULE_AUTHOR("Alexander Shishkin");
struct tracectx {
unsigned int etb_bufsz;
void __iomem *etb_regs;
void __iomem *etm_regs;
void __iomem **etm_regs;
int etm_regs_count;
unsigned long flags;
int ncmppairs;
int etm_portsz;
int etm_contextid_size;
u32 etb_fc;
unsigned long range_start;
unsigned long range_end;
unsigned long data_range_start;
unsigned long data_range_end;
bool dump_initial_etb;
struct device *dev;
struct clk *emu_clk;
struct mutex mutex;
};
static struct tracectx tracer;
static struct tracectx tracer = {
.range_start = (unsigned long)_stext,
.range_end = (unsigned long)_etext,
};
static inline bool trace_isrunning(struct tracectx *t)
{
return !!(t->flags & TRACER_RUNNING);
}
static int etm_setup_address_range(struct tracectx *t, int n,
static int etm_setup_address_range(struct tracectx *t, int id, int n,
unsigned long start, unsigned long end, int exclude, int data)
{
u32 flags = ETMAAT_ARM | ETMAAT_IGNCONTEXTID | ETMAAT_NSONLY | \
u32 flags = ETMAAT_ARM | ETMAAT_IGNCONTEXTID | ETMAAT_IGNSECURITY |
ETMAAT_NOVALCMP;
if (n < 1 || n > t->ncmppairs)
@@ -72,95 +84,185 @@ static int etm_setup_address_range(struct tracectx *t, int n,
flags |= ETMAAT_IEXEC;
/* first comparator for the range */
etm_writel(t, flags, ETMR_COMP_ACC_TYPE(n * 2));
etm_writel(t, start, ETMR_COMP_VAL(n * 2));
etm_writel(t, id, flags, ETMR_COMP_ACC_TYPE(n * 2));
etm_writel(t, id, start, ETMR_COMP_VAL(n * 2));
/* second comparator is right next to it */
etm_writel(t, flags, ETMR_COMP_ACC_TYPE(n * 2 + 1));
etm_writel(t, end, ETMR_COMP_VAL(n * 2 + 1));
etm_writel(t, id, flags, ETMR_COMP_ACC_TYPE(n * 2 + 1));
etm_writel(t, id, end, ETMR_COMP_VAL(n * 2 + 1));
flags = exclude ? ETMTE_INCLEXCL : 0;
etm_writel(t, flags | (1 << n), ETMR_TRACEENCTRL);
if (data) {
flags = exclude ? ETMVDC3_EXCLONLY : 0;
if (exclude)
n += 8;
etm_writel(t, id, flags | BIT(n), ETMR_VIEWDATACTRL3);
} else {
flags = exclude ? ETMTE_INCLEXCL : 0;
etm_writel(t, id, flags | (1 << n), ETMR_TRACEENCTRL);
}
return 0;
}
static int trace_start_etm(struct tracectx *t, int id)
{
u32 v;
unsigned long timeout = TRACER_TIMEOUT;
v = ETMCTRL_OPTS | ETMCTRL_PROGRAM | ETMCTRL_PORTSIZE(t->etm_portsz);
v |= ETMCTRL_CONTEXTIDSIZE(t->etm_contextid_size);
if (t->flags & TRACER_CYCLE_ACC)
v |= ETMCTRL_CYCLEACCURATE;
if (t->flags & TRACER_BRANCHOUTPUT)
v |= ETMCTRL_BRANCH_OUTPUT;
if (t->flags & TRACER_TRACE_DATA)
v |= ETMCTRL_DATA_DO_ADDR;
if (t->flags & TRACER_TIMESTAMP)
v |= ETMCTRL_TIMESTAMP_EN;
if (t->flags & TRACER_RETURN_STACK)
v |= ETMCTRL_RETURN_STACK_EN;
etm_unlock(t, id);
etm_writel(t, id, v, ETMR_CTRL);
while (!(etm_readl(t, id, ETMR_CTRL) & ETMCTRL_PROGRAM) && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to assert timed out\n");
etm_lock(t, id);
return -EFAULT;
}
if (t->range_start || t->range_end)
etm_setup_address_range(t, id, 1,
t->range_start, t->range_end, 0, 0);
else
etm_writel(t, id, ETMTE_INCLEXCL, ETMR_TRACEENCTRL);
etm_writel(t, id, 0, ETMR_TRACEENCTRL2);
etm_writel(t, id, 0, ETMR_TRACESSCTRL);
etm_writel(t, id, 0x6f, ETMR_TRACEENEVT);
etm_writel(t, id, 0, ETMR_VIEWDATACTRL1);
etm_writel(t, id, 0, ETMR_VIEWDATACTRL2);
if (t->data_range_start || t->data_range_end)
etm_setup_address_range(t, id, 2, t->data_range_start,
t->data_range_end, 0, 1);
else
etm_writel(t, id, ETMVDC3_EXCLONLY, ETMR_VIEWDATACTRL3);
etm_writel(t, id, 0x6f, ETMR_VIEWDATAEVT);
v &= ~ETMCTRL_PROGRAM;
v |= ETMCTRL_PORTSEL;
etm_writel(t, id, v, ETMR_CTRL);
timeout = TRACER_TIMEOUT;
while (etm_readl(t, id, ETMR_CTRL) & ETMCTRL_PROGRAM && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to deassert timed out\n");
etm_lock(t, id);
return -EFAULT;
}
etm_lock(t, id);
return 0;
}
static int trace_start(struct tracectx *t)
{
u32 v;
unsigned long timeout = TRACER_TIMEOUT;
int ret;
int id;
u32 etb_fc = t->etb_fc;
etb_unlock(t);
etb_writel(t, 0, ETBR_FORMATTERCTRL);
t->dump_initial_etb = false;
etb_writel(t, 0, ETBR_WRITEADDR);
etb_writel(t, etb_fc, ETBR_FORMATTERCTRL);
etb_writel(t, 1, ETBR_CTRL);
etb_lock(t);
/* configure etm */
v = ETMCTRL_OPTS | ETMCTRL_PROGRAM | ETMCTRL_PORTSIZE(t->etm_portsz);
if (t->flags & TRACER_CYCLE_ACC)
v |= ETMCTRL_CYCLEACCURATE;
etm_unlock(t);
etm_writel(t, v, ETMR_CTRL);
while (!(etm_readl(t, ETMR_CTRL) & ETMCTRL_PROGRAM) && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to assert timed out\n");
etm_lock(t);
return -EFAULT;
/* configure etm(s) */
for (id = 0; id < t->etm_regs_count; id++) {
ret = trace_start_etm(t, id);
if (ret)
return ret;
}
etm_setup_address_range(t, 1, (unsigned long)_stext,
(unsigned long)_etext, 0, 0);
etm_writel(t, 0, ETMR_TRACEENCTRL2);
etm_writel(t, 0, ETMR_TRACESSCTRL);
etm_writel(t, 0x6f, ETMR_TRACEENEVT);
v &= ~ETMCTRL_PROGRAM;
v |= ETMCTRL_PORTSEL;
etm_writel(t, v, ETMR_CTRL);
timeout = TRACER_TIMEOUT;
while (etm_readl(t, ETMR_CTRL) & ETMCTRL_PROGRAM && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to deassert timed out\n");
etm_lock(t);
return -EFAULT;
}
etm_lock(t);
t->flags |= TRACER_RUNNING;
return 0;
}
static int trace_stop(struct tracectx *t)
static int trace_stop_etm(struct tracectx *t, int id)
{
unsigned long timeout = TRACER_TIMEOUT;
etm_unlock(t);
etm_unlock(t, id);
etm_writel(t, 0x440, ETMR_CTRL);
while (!(etm_readl(t, ETMR_CTRL) & ETMCTRL_PROGRAM) && --timeout)
etm_writel(t, id, 0x440, ETMR_CTRL);
while (!(etm_readl(t, id, ETMR_CTRL) & ETMCTRL_PROGRAM) && --timeout)
;
if (!timeout) {
dev_dbg(t->dev, "Waiting for progbit to assert timed out\n");
etm_lock(t);
dev_err(t->dev,
"etm%d: Waiting for progbit to assert timed out\n",
id);
etm_lock(t, id);
return -EFAULT;
}
etm_lock(t);
etm_lock(t, id);
return 0;
}
static int trace_power_down_etm(struct tracectx *t, int id)
{
unsigned long timeout = TRACER_TIMEOUT;
etm_unlock(t, id);
while (!(etm_readl(t, id, ETMR_STATUS) & ETMST_PROGBIT) && --timeout)
;
if (!timeout) {
dev_err(t->dev, "etm%d: Waiting for status progbit to assert timed out\n",
id);
etm_lock(t, id);
return -EFAULT;
}
etm_writel(t, id, 0x441, ETMR_CTRL);
etm_lock(t, id);
return 0;
}
static int trace_stop(struct tracectx *t)
{
int id;
unsigned long timeout = TRACER_TIMEOUT;
u32 etb_fc = t->etb_fc;
for (id = 0; id < t->etm_regs_count; id++)
trace_stop_etm(t, id);
for (id = 0; id < t->etm_regs_count; id++)
trace_power_down_etm(t, id);
etb_unlock(t);
etb_writel(t, ETBFF_MANUAL_FLUSH, ETBR_FORMATTERCTRL);
if (etb_fc) {
etb_fc |= ETBFF_STOPFL;
etb_writel(t, t->etb_fc, ETBR_FORMATTERCTRL);
}
etb_writel(t, etb_fc | ETBFF_MANUAL_FLUSH, ETBR_FORMATTERCTRL);
timeout = TRACER_TIMEOUT;
while (etb_readl(t, ETBR_FORMATTERCTRL) &
@@ -185,24 +287,15 @@ static int trace_stop(struct tracectx *t)
static int etb_getdatalen(struct tracectx *t)
{
u32 v;
int rp, wp;
int wp;
v = etb_readl(t, ETBR_STATUS);
if (v & 1)
return t->etb_bufsz;
rp = etb_readl(t, ETBR_READADDR);
wp = etb_readl(t, ETBR_WRITEADDR);
if (rp > wp) {
etb_writel(t, 0, ETBR_READADDR);
etb_writel(t, 0, ETBR_WRITEADDR);
return 0;
}
return wp - rp;
return wp;
}
/* sysrq+v will always stop the running trace and leave it at that */
@@ -235,21 +328,18 @@ static void etm_dump(void)
printk("%08x", cpu_to_be32(etb_readl(t, ETBR_READMEM)));
printk(KERN_INFO "\n--- ETB buffer end ---\n");
/* deassert the overflow bit */
etb_writel(t, 1, ETBR_CTRL);
etb_writel(t, 0, ETBR_CTRL);
etb_writel(t, 0, ETBR_TRIGGERCOUNT);
etb_writel(t, 0, ETBR_READADDR);
etb_writel(t, 0, ETBR_WRITEADDR);
etb_lock(t);
}
static void sysrq_etm_dump(int key)
{
if (!mutex_trylock(&tracer.mutex)) {
printk(KERN_INFO "Tracing hardware busy\n");
return;
}
dev_dbg(tracer.dev, "Dumping ETB buffer\n");
etm_dump();
mutex_unlock(&tracer.mutex);
}
static struct sysrq_key_op sysrq_etm_op = {
@@ -276,6 +366,10 @@ static ssize_t etb_read(struct file *file, char __user *data,
struct tracectx *t = file->private_data;
u32 first = 0;
u32 *buf;
int wpos;
int skip;
long wlength;
loff_t pos = *ppos;
mutex_lock(&t->mutex);
@@ -287,31 +381,39 @@ static ssize_t etb_read(struct file *file, char __user *data,
etb_unlock(t);
total = etb_getdatalen(t);
if (total == 0 && t->dump_initial_etb)
total = t->etb_bufsz;
if (total == t->etb_bufsz)
first = etb_readl(t, ETBR_WRITEADDR);
if (pos > total * 4) {
skip = 0;
wpos = total;
} else {
skip = (int)pos % 4;
wpos = (int)pos / 4;
}
total -= wpos;
first = (first + wpos) % t->etb_bufsz;
etb_writel(t, first, ETBR_READADDR);
length = min(total * 4, (int)len);
buf = vmalloc(length);
wlength = min(total, DIV_ROUND_UP(skip + (int)len, 4));
length = min(total * 4 - skip, (int)len);
buf = vmalloc(wlength * 4);
dev_dbg(t->dev, "ETB buffer length: %d\n", total);
dev_dbg(t->dev, "ETB read %ld bytes to %lld from %ld words at %d\n",
length, pos, wlength, first);
dev_dbg(t->dev, "ETB buffer length: %d\n", total + wpos);
dev_dbg(t->dev, "ETB status reg: %x\n", etb_readl(t, ETBR_STATUS));
for (i = 0; i < length / 4; i++)
for (i = 0; i < wlength; i++)
buf[i] = etb_readl(t, ETBR_READMEM);
/* the only way to deassert overflow bit in ETB status is this */
etb_writel(t, 1, ETBR_CTRL);
etb_writel(t, 0, ETBR_CTRL);
etb_writel(t, 0, ETBR_WRITEADDR);
etb_writel(t, 0, ETBR_READADDR);
etb_writel(t, 0, ETBR_TRIGGERCOUNT);
etb_lock(t);
length -= copy_to_user(data, buf, length);
length -= copy_to_user(data, (u8 *)buf + skip, length);
vfree(buf);
*ppos = pos + length;
out:
mutex_unlock(&t->mutex);
@@ -348,28 +450,17 @@ static int __devinit etb_probe(struct amba_device *dev, const struct amba_id *id
if (ret)
goto out;
mutex_lock(&t->mutex);
t->etb_regs = ioremap_nocache(dev->res.start, resource_size(&dev->res));
if (!t->etb_regs) {
ret = -ENOMEM;
goto out_release;
}
t->dev = &dev->dev;
t->dump_initial_etb = true;
amba_set_drvdata(dev, t);
etb_miscdev.parent = &dev->dev;
ret = misc_register(&etb_miscdev);
if (ret)
goto out_unmap;
t->emu_clk = clk_get(&dev->dev, "emu_src_ck");
if (IS_ERR(t->emu_clk)) {
dev_dbg(&dev->dev, "Failed to obtain emu_src_ck.\n");
return -EFAULT;
}
clk_enable(t->emu_clk);
etb_unlock(t);
t->etb_bufsz = etb_readl(t, ETBR_DEPTH);
dev_dbg(&dev->dev, "Size: %x\n", t->etb_bufsz);
@@ -378,6 +469,20 @@ static int __devinit etb_probe(struct amba_device *dev, const struct amba_id *id
etb_writel(t, 0, ETBR_CTRL);
etb_writel(t, 0x1000, ETBR_FORMATTERCTRL);
etb_lock(t);
mutex_unlock(&t->mutex);
etb_miscdev.parent = &dev->dev;
ret = misc_register(&etb_miscdev);
if (ret)
goto out_unmap;
/* Get optional clock. Currently used to select clock source on omap3 */
t->emu_clk = clk_get(&dev->dev, "emu_src_ck");
if (IS_ERR(t->emu_clk))
dev_dbg(&dev->dev, "Failed to obtain emu_src_ck.\n");
else
clk_enable(t->emu_clk);
dev_dbg(&dev->dev, "ETB AMBA driver initialized.\n");
@@ -385,10 +490,13 @@ out:
return ret;
out_unmap:
mutex_lock(&t->mutex);
amba_set_drvdata(dev, NULL);
iounmap(t->etb_regs);
t->etb_regs = NULL;
out_release:
mutex_unlock(&t->mutex);
amba_release_regions(dev);
return ret;
@@ -403,8 +511,10 @@ static int etb_remove(struct amba_device *dev)
iounmap(t->etb_regs);
t->etb_regs = NULL;
clk_disable(t->emu_clk);
clk_put(t->emu_clk);
if (!IS_ERR(t->emu_clk)) {
clk_disable(t->emu_clk);
clk_put(t->emu_clk);
}
amba_release_regions(dev);
@@ -448,7 +558,10 @@ static ssize_t trace_running_store(struct kobject *kobj,
return -EINVAL;
mutex_lock(&tracer.mutex);
ret = value ? trace_start(&tracer) : trace_stop(&tracer);
if (!tracer.etb_regs)
ret = -ENODEV;
else
ret = value ? trace_start(&tracer) : trace_stop(&tracer);
mutex_unlock(&tracer.mutex);
return ret ? : n;
@@ -463,36 +576,50 @@ static ssize_t trace_info_show(struct kobject *kobj,
{
u32 etb_wa, etb_ra, etb_st, etb_fc, etm_ctrl, etm_st;
int datalen;
int id;
int ret;
etb_unlock(&tracer);
datalen = etb_getdatalen(&tracer);
etb_wa = etb_readl(&tracer, ETBR_WRITEADDR);
etb_ra = etb_readl(&tracer, ETBR_READADDR);
etb_st = etb_readl(&tracer, ETBR_STATUS);
etb_fc = etb_readl(&tracer, ETBR_FORMATTERCTRL);
etb_lock(&tracer);
mutex_lock(&tracer.mutex);
if (tracer.etb_regs) {
etb_unlock(&tracer);
datalen = etb_getdatalen(&tracer);
etb_wa = etb_readl(&tracer, ETBR_WRITEADDR);
etb_ra = etb_readl(&tracer, ETBR_READADDR);
etb_st = etb_readl(&tracer, ETBR_STATUS);
etb_fc = etb_readl(&tracer, ETBR_FORMATTERCTRL);
etb_lock(&tracer);
} else {
etb_wa = etb_ra = etb_st = etb_fc = ~0;
datalen = -1;
}
etm_unlock(&tracer);
etm_ctrl = etm_readl(&tracer, ETMR_CTRL);
etm_st = etm_readl(&tracer, ETMR_STATUS);
etm_lock(&tracer);
return sprintf(buf, "Trace buffer len: %d\nComparator pairs: %d\n"
ret = sprintf(buf, "Trace buffer len: %d\nComparator pairs: %d\n"
"ETBR_WRITEADDR:\t%08x\n"
"ETBR_READADDR:\t%08x\n"
"ETBR_STATUS:\t%08x\n"
"ETBR_FORMATTERCTRL:\t%08x\n"
"ETMR_CTRL:\t%08x\n"
"ETMR_STATUS:\t%08x\n",
"ETBR_FORMATTERCTRL:\t%08x\n",
datalen,
tracer.ncmppairs,
etb_wa,
etb_ra,
etb_st,
etb_fc,
etb_fc
);
for (id = 0; id < tracer.etm_regs_count; id++) {
etm_unlock(&tracer, id);
etm_ctrl = etm_readl(&tracer, id, ETMR_CTRL);
etm_st = etm_readl(&tracer, id, ETMR_STATUS);
etm_lock(&tracer, id);
ret += sprintf(buf + ret, "ETMR_CTRL:\t%08x\n"
"ETMR_STATUS:\t%08x\n",
etm_ctrl,
etm_st
);
}
mutex_unlock(&tracer.mutex);
return ret;
}
static struct kobj_attribute trace_info_attr =
@@ -531,42 +658,260 @@ static ssize_t trace_mode_store(struct kobject *kobj,
static struct kobj_attribute trace_mode_attr =
__ATTR(trace_mode, 0644, trace_mode_show, trace_mode_store);
static ssize_t trace_contextid_size_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
/* 0: No context id tracing, 1: One byte, 2: Two bytes, 3: Four bytes */
return sprintf(buf, "%d\n", (1 << tracer.etm_contextid_size) >> 1);
}
static ssize_t trace_contextid_size_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned int contextid_size;
if (sscanf(buf, "%u", &contextid_size) != 1)
return -EINVAL;
if (contextid_size == 3 || contextid_size > 4)
return -EINVAL;
mutex_lock(&tracer.mutex);
tracer.etm_contextid_size = fls(contextid_size);
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_contextid_size_attr =
__ATTR(trace_contextid_size, 0644,
trace_contextid_size_show, trace_contextid_size_store);
static ssize_t trace_branch_output_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", !!(tracer.flags & TRACER_BRANCHOUTPUT));
}
static ssize_t trace_branch_output_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned int branch_output;
if (sscanf(buf, "%u", &branch_output) != 1)
return -EINVAL;
mutex_lock(&tracer.mutex);
if (branch_output) {
tracer.flags |= TRACER_BRANCHOUTPUT;
/* Branch broadcasting is incompatible with the return stack */
tracer.flags &= ~TRACER_RETURN_STACK;
} else {
tracer.flags &= ~TRACER_BRANCHOUTPUT;
}
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_branch_output_attr =
__ATTR(trace_branch_output, 0644,
trace_branch_output_show, trace_branch_output_store);
static ssize_t trace_return_stack_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", !!(tracer.flags & TRACER_RETURN_STACK));
}
static ssize_t trace_return_stack_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned int return_stack;
if (sscanf(buf, "%u", &return_stack) != 1)
return -EINVAL;
mutex_lock(&tracer.mutex);
if (return_stack) {
tracer.flags |= TRACER_RETURN_STACK;
/* Return stack is incompatible with branch broadcasting */
tracer.flags &= ~TRACER_BRANCHOUTPUT;
} else {
tracer.flags &= ~TRACER_RETURN_STACK;
}
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_return_stack_attr =
__ATTR(trace_return_stack, 0644,
trace_return_stack_show, trace_return_stack_store);
static ssize_t trace_timestamp_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
return sprintf(buf, "%d\n", !!(tracer.flags & TRACER_TIMESTAMP));
}
static ssize_t trace_timestamp_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned int timestamp;
if (sscanf(buf, "%u", &timestamp) != 1)
return -EINVAL;
mutex_lock(&tracer.mutex);
if (timestamp)
tracer.flags |= TRACER_TIMESTAMP;
else
tracer.flags &= ~TRACER_TIMESTAMP;
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_timestamp_attr =
__ATTR(trace_timestamp, 0644,
trace_timestamp_show, trace_timestamp_store);
static ssize_t trace_range_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
return sprintf(buf, "%08lx %08lx\n",
tracer.range_start, tracer.range_end);
}
static ssize_t trace_range_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned long range_start, range_end;
if (sscanf(buf, "%lx %lx", &range_start, &range_end) != 2)
return -EINVAL;
mutex_lock(&tracer.mutex);
tracer.range_start = range_start;
tracer.range_end = range_end;
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_range_attr =
__ATTR(trace_range, 0644, trace_range_show, trace_range_store);
static ssize_t trace_data_range_show(struct kobject *kobj,
struct kobj_attribute *attr,
char *buf)
{
unsigned long range_start;
u64 range_end;
mutex_lock(&tracer.mutex);
range_start = tracer.data_range_start;
range_end = tracer.data_range_end;
if (!range_end && (tracer.flags & TRACER_TRACE_DATA))
range_end = 0x100000000ULL;
mutex_unlock(&tracer.mutex);
return sprintf(buf, "%08lx %08llx\n", range_start, range_end);
}
static ssize_t trace_data_range_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t n)
{
unsigned long range_start;
u64 range_end;
if (sscanf(buf, "%lx %llx", &range_start, &range_end) != 2)
return -EINVAL;
mutex_lock(&tracer.mutex);
tracer.data_range_start = range_start;
tracer.data_range_end = (unsigned long)range_end;
if (range_end)
tracer.flags |= TRACER_TRACE_DATA;
else
tracer.flags &= ~TRACER_TRACE_DATA;
mutex_unlock(&tracer.mutex);
return n;
}
static struct kobj_attribute trace_data_range_attr =
__ATTR(trace_data_range, 0644,
trace_data_range_show, trace_data_range_store);
static int __devinit etm_probe(struct amba_device *dev, const struct amba_id *id)
{
struct tracectx *t = &tracer;
int ret = 0;
void __iomem **new_regs;
int new_count;
u32 etmccr;
u32 etmidr;
u32 etmccer = 0;
u8 etm_version = 0;
mutex_lock(&t->mutex);
new_count = t->etm_regs_count + 1;
new_regs = krealloc(t->etm_regs,
sizeof(t->etm_regs[0]) * new_count, GFP_KERNEL);
if (!new_regs) {
dev_dbg(&dev->dev, "Failed to allocate ETM register array\n");
ret = -ENOMEM;
goto out;
}
t->etm_regs = new_regs;
ret = amba_request_regions(dev, NULL);
if (ret)
goto out;
t->etm_regs[t->etm_regs_count] =
ioremap_nocache(dev->res.start, resource_size(&dev->res));
if (!t->etm_regs[t->etm_regs_count]) {
ret = -ENOMEM;
goto out_release;
}
amba_set_drvdata(dev, t->etm_regs[t->etm_regs_count]);
t->dev = &dev->dev;
t->flags = TRACER_CYCLE_ACC | TRACER_TRACE_DATA | TRACER_BRANCHOUTPUT;
t->etm_portsz = 1;
t->etm_contextid_size = 3;
etm_unlock(t, t->etm_regs_count);
(void)etm_readl(t, t->etm_regs_count, ETMMR_PDSR);
/* dummy first read */
(void)etm_readl(&tracer, t->etm_regs_count, ETMMR_OSSRR);
etmccr = etm_readl(t, t->etm_regs_count, ETMR_CONFCODE);
t->ncmppairs = etmccr & 0xf;
if (etmccr & ETMCCR_ETMIDR_PRESENT) {
etmidr = etm_readl(t, t->etm_regs_count, ETMR_ID);
etm_version = ETMIDR_VERSION(etmidr);
if (etm_version >= ETMIDR_VERSION_3_1)
etmccer = etm_readl(t, t->etm_regs_count, ETMR_CCE);
}
etm_writel(t, t->etm_regs_count, 0x441, ETMR_CTRL);
etm_writel(t, t->etm_regs_count, new_count, ETMR_TRACEIDR);
etm_lock(t, t->etm_regs_count);
ret = sysfs_create_file(&dev->dev.kobj,
&trace_running_attr.attr);
@@ -582,35 +927,100 @@ static int __devinit etm_probe(struct amba_device *dev, const struct amba_id *id
if (ret)
dev_dbg(&dev->dev, "Failed to create trace_mode in sysfs\n");
ret = sysfs_create_file(&dev->dev.kobj,
&trace_contextid_size_attr.attr);
if (ret)
dev_dbg(&dev->dev,
"Failed to create trace_contextid_size in sysfs\n");
ret = sysfs_create_file(&dev->dev.kobj,
&trace_branch_output_attr.attr);
if (ret)
dev_dbg(&dev->dev,
"Failed to create trace_branch_output in sysfs\n");
if (etmccer & ETMCCER_RETURN_STACK_IMPLEMENTED) {
ret = sysfs_create_file(&dev->dev.kobj,
&trace_return_stack_attr.attr);
if (ret)
dev_dbg(&dev->dev,
"Failed to create trace_return_stack in sysfs\n");
}
if (etmccer & ETMCCER_TIMESTAMPING_IMPLEMENTED) {
ret = sysfs_create_file(&dev->dev.kobj,
&trace_timestamp_attr.attr);
if (ret)
dev_dbg(&dev->dev,
"Failed to create trace_timestamp in sysfs\n");
}
ret = sysfs_create_file(&dev->dev.kobj, &trace_range_attr.attr);
if (ret)
dev_dbg(&dev->dev, "Failed to create trace_range in sysfs\n");
if (etm_version < ETMIDR_VERSION_PFT_1_0) {
ret = sysfs_create_file(&dev->dev.kobj,
&trace_data_range_attr.attr);
if (ret)
dev_dbg(&dev->dev,
"Failed to create trace_data_range in sysfs\n");
} else {
tracer.flags &= ~TRACER_TRACE_DATA;
}
dev_dbg(&dev->dev, "ETM AMBA driver initialized.\n");
/* Enable formatter if there are multiple trace sources */
if (new_count > 1)
t->etb_fc = ETBFF_ENFCONT | ETBFF_ENFTC;
t->etm_regs_count = new_count;
out:
mutex_unlock(&t->mutex);
return ret;
out_unmap:
amba_set_drvdata(dev, NULL);
iounmap(t->etm_regs[t->etm_regs_count]);
out_release:
amba_release_regions(dev);
mutex_unlock(&t->mutex);
return ret;
}
static int etm_remove(struct amba_device *dev)
{
int i;
struct tracectx *t = &tracer;
void __iomem *etm_regs = amba_get_drvdata(dev);
sysfs_remove_file(&dev->dev.kobj, &trace_running_attr.attr);
sysfs_remove_file(&dev->dev.kobj, &trace_info_attr.attr);
sysfs_remove_file(&dev->dev.kobj, &trace_mode_attr.attr);
sysfs_remove_file(&dev->dev.kobj, &trace_range_attr.attr);
sysfs_remove_file(&dev->dev.kobj, &trace_data_range_attr.attr);
amba_set_drvdata(dev, NULL);
mutex_lock(&t->mutex);
for (i = 0; i < t->etm_regs_count; i++)
if (t->etm_regs[i] == etm_regs)
break;
for (; i < t->etm_regs_count - 1; i++)
t->etm_regs[i] = t->etm_regs[i + 1];
t->etm_regs_count--;
if (!t->etm_regs_count) {
kfree(t->etm_regs);
t->etm_regs = NULL;
}
mutex_unlock(&t->mutex);
iounmap(etm_regs);
amba_release_regions(dev);
return 0;
}
@@ -620,6 +1030,10 @@ static struct amba_id etm_ids[] = {
.id = 0x0003b921,
.mask = 0x0007ffff,
},
{
.id = 0x0003b950,
.mask = 0x0007ffff,
},
{ 0, 0 },
};
@@ -637,6 +1051,8 @@ static int __init etm_init(void)
{
int retval;
mutex_init(&tracer.mutex);
retval = amba_driver_register(&etb_driver);
if (retval) {
printk(KERN_ERR "Failed to register etb\n");


@@ -13,6 +13,7 @@
*/
#include <linux/ftrace.h>
#include <linux/module.h>
#include <linux/uaccess.h>
#include <asm/cacheflush.h>
@@ -63,6 +64,20 @@ static unsigned long adjust_address(struct dyn_ftrace *rec, unsigned long addr)
}
#endif
int ftrace_arch_code_modify_prepare(void)
{
set_kernel_text_rw();
set_all_modules_text_rw();
return 0;
}
int ftrace_arch_code_modify_post_process(void)
{
set_all_modules_text_ro();
set_kernel_text_ro();
return 0;
}
static unsigned long ftrace_call_replace(unsigned long pc, unsigned long addr)
{
return arm_gen_branch_link(pc, addr);
@@ -179,20 +194,21 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
old = *parent;
*parent = return_hooker;
err = ftrace_push_return_trace(old, self_addr, &trace.depth,
frame_pointer);
if (err == -EBUSY) {
*parent = old;
return;
}
trace.func = self_addr;
/* Only trace if the calling function expects to */
if (!ftrace_graph_entry(&trace)) {
current->curr_ret_stack--;
*parent = old;
}
}
#ifdef CONFIG_DYNAMIC_FTRACE


@@ -10,6 +10,8 @@
#include <linux/export.h>
#include <linux/init.h>
#include <linux/device.h>
#include <linux/notifier.h>
#include <linux/cpu.h>
#include <linux/syscore_ops.h>
#include <linux/string.h>
@@ -103,6 +105,25 @@ static struct syscore_ops leds_syscore_ops = {
.resume = leds_resume,
};
static int leds_idle_notifier(struct notifier_block *nb, unsigned long val,
void *data)
{
switch (val) {
case IDLE_START:
leds_event(led_idle_start);
break;
case IDLE_END:
leds_event(led_idle_end);
break;
}
return 0;
}
static struct notifier_block leds_idle_nb = {
.notifier_call = leds_idle_notifier,
};
static int __init leds_init(void)
{
int ret;
@@ -111,8 +132,11 @@ static int __init leds_init(void)
ret = device_register(&leds_device);
if (ret == 0)
ret = device_create_file(&leds_device, &dev_attr_event);
if (ret == 0) {
register_syscore_ops(&leds_syscore_ops);
idle_notifier_register(&leds_idle_nb);
}
return ret;
}


@@ -31,9 +31,10 @@
#include <linux/random.h>
#include <linux/hw_breakpoint.h>
#include <linux/cpuidle.h>
#include <linux/console.h>
#include <linux/cpufreq.h>
#include <asm/cacheflush.h>
#include <asm/leds.h>
#include <asm/processor.h>
#include <asm/thread_notify.h>
#include <asm/stacktrace.h>
@@ -60,6 +61,18 @@ extern void setup_mm_for_reboot(void);
static volatile int hlt_counter;
#ifdef CONFIG_SMP
void arch_trigger_all_cpu_backtrace(void)
{
smp_send_all_cpu_backtrace();
}
#else
void arch_trigger_all_cpu_backtrace(void)
{
dump_stack();
}
#endif
void disable_hlt(void)
{
hlt_counter++;
@@ -92,6 +105,31 @@ __setup("hlt", hlt_setup);
extern void call_with_stack(void (*fn)(void *), void *arg, void *sp);
typedef void (*phys_reset_t)(unsigned long);
#ifdef CONFIG_ARM_FLUSH_CONSOLE_ON_RESTART
void arm_machine_flush_console(void)
{
printk("\n");
pr_emerg("Restarting %s\n", linux_banner);
if (console_trylock()) {
console_unlock();
return;
}
mdelay(50);
local_irq_disable();
if (!console_trylock())
pr_emerg("arm_restart: Console was locked! Busting\n");
else
pr_emerg("arm_restart: Console was locked!\n");
console_unlock();
}
#else
void arm_machine_flush_console(void)
{
}
#endif
/*
* A temporary stack to use for CPU reset. This is static so that we
* don't clobber it with the identity mapping. When running with this
@@ -207,9 +245,9 @@ void cpu_idle(void)
/* endless idle loop with no priority at all */
while (1) {
idle_notifier_call_chain(IDLE_START);
tick_nohz_idle_enter();
rcu_idle_enter();
while (!need_resched()) {
#ifdef CONFIG_HOTPLUG_CPU
if (cpu_is_offline(smp_processor_id()))
@@ -240,9 +278,9 @@ void cpu_idle(void)
} else
local_irq_enable();
}
rcu_idle_exit();
tick_nohz_idle_exit();
idle_notifier_call_chain(IDLE_END);
schedule_preempt_disabled();
}
}
@@ -260,6 +298,15 @@ __setup("reboot=", reboot_setup);
void machine_shutdown(void)
{
#ifdef CONFIG_SMP
/*
* Disable preemption so we're guaranteed to
* run to power off or reboot and prevent
* the possibility of switching to another
* thread that might wind up blocking on
* one of the stopped CPUs.
*/
preempt_disable();
smp_send_stop();
#endif
}
@@ -281,6 +328,10 @@ void machine_restart(char *cmd)
{
machine_shutdown();
/* Flush the console to make sure all the relevant messages make it
* out to the console drivers */
arm_machine_flush_console();
arm_pm_restart(reboot_mode, cmd);
/* Give a grace period for failure to restart of 1s */
@@ -291,6 +342,77 @@ void machine_restart(char *cmd)
while (1);
}
/*
* dump a block of kernel memory from around the given address
*/
static void show_data(unsigned long addr, int nbytes, const char *name)
{
int i, j;
int nlines;
u32 *p;
/*
* don't attempt to dump non-kernel addresses or
* values that are probably just small negative numbers
*/
if (addr < PAGE_OFFSET || addr > -256UL)
return;
printk("\n%s: %#lx:\n", name, addr);
/*
* round address down to a 32 bit boundary
* and always dump a multiple of 32 bytes
*/
p = (u32 *)(addr & ~(sizeof(u32) - 1));
nbytes += (addr & (sizeof(u32) - 1));
nlines = (nbytes + 31) / 32;
for (i = 0; i < nlines; i++) {
/*
* just display low 16 bits of address to keep
* each line of the dump < 80 characters
*/
printk("%04lx ", (unsigned long)p & 0xffff);
for (j = 0; j < 8; j++) {
u32 data;
if (probe_kernel_address(p, data)) {
printk(" ********");
} else {
printk(" %08x", data);
}
++p;
}
printk("\n");
}
}
static void show_extra_register_data(struct pt_regs *regs, int nbytes)
{
mm_segment_t fs;
fs = get_fs();
set_fs(KERNEL_DS);
show_data(regs->ARM_pc - nbytes, nbytes * 2, "PC");
show_data(regs->ARM_lr - nbytes, nbytes * 2, "LR");
show_data(regs->ARM_sp - nbytes, nbytes * 2, "SP");
show_data(regs->ARM_ip - nbytes, nbytes * 2, "IP");
show_data(regs->ARM_fp - nbytes, nbytes * 2, "FP");
show_data(regs->ARM_r0 - nbytes, nbytes * 2, "R0");
show_data(regs->ARM_r1 - nbytes, nbytes * 2, "R1");
show_data(regs->ARM_r2 - nbytes, nbytes * 2, "R2");
show_data(regs->ARM_r3 - nbytes, nbytes * 2, "R3");
show_data(regs->ARM_r4 - nbytes, nbytes * 2, "R4");
show_data(regs->ARM_r5 - nbytes, nbytes * 2, "R5");
show_data(regs->ARM_r6 - nbytes, nbytes * 2, "R6");
show_data(regs->ARM_r7 - nbytes, nbytes * 2, "R7");
show_data(regs->ARM_r8 - nbytes, nbytes * 2, "R8");
show_data(regs->ARM_r9 - nbytes, nbytes * 2, "R9");
show_data(regs->ARM_r10 - nbytes, nbytes * 2, "R10");
set_fs(fs);
}
void __show_regs(struct pt_regs *regs)
{
unsigned long flags;
@@ -349,7 +471,49 @@ void __show_regs(struct pt_regs *regs)
printk("Control: %08x%s\n", ctrl, buf);
}
#endif
#ifdef CONFIG_CPU_CP15
{
unsigned long reg0, reg1, reg2, reg3;
asm ("mrc p15, 0, %0, c0, c0, 5\n": "=r" (reg0));
if (reg0 & (1 << 31))
/* MPIDR */
printk("CPU %ld / CLUSTER %ld\n",
reg0 & 0x3, (reg0 >> 8) & 0xF);
asm ("mrc p15, 0, %0, c5, c0, 0\n\t"
"mrc p15, 0, %1, c5, c1, 0\n"
: "=r" (reg0), "=r" (reg1));
asm ("mrc p15, 0, %0, c5, c0, 1\n\t"
"mrc p15, 0, %1, c5, c1, 1\n"
: "=r" (reg2), "=r" (reg3));
printk("DFSR: %08lx, ADFSR: %08lx, IFSR: %08lx, AIFSR: %08lx\n",
reg0, reg1, reg2, reg3);
asm ("mrc p15, 0, %0, c0, c0, 0\n": "=r" (reg0));
if (((reg0 >> 4) & 0xFFF) == 0xC0F) { /* Cortex-A15 */
asm ("mrrc p15, 0, %0, %1, c15\n\t"
"mrrc p15, 1, %2, %3, c15\n"
: "=r" (reg0), "=r" (reg1),
"=r" (reg2), "=r" (reg3));
printk("CPUMERRSR: %08lx_%08lx, L2MERRSR: %08lx_%08lx\n",
reg1, reg0, reg3, reg2);
}
}
#endif
printk("CPUFREQ: %d KHz\n", cpufreq_get(raw_smp_processor_id()));
#ifdef CONFIG_ARM_EXYNOS5410_BUS_DEVFREQ
{
extern unsigned long curr_mif_freq;
printk("MIFFREQ: %ld KHz\n", curr_mif_freq);
}
#endif
show_extra_register_data(regs, 128);
}
void show_regs(struct pt_regs * regs)


@@ -21,6 +21,8 @@ struct clock_data {
u32 epoch_cyc_copy;
u32 mult;
u32 shift;
bool suspended;
bool needs_suspend;
};
static void sched_clock_poll(unsigned long wrap_ticks);
@@ -49,6 +51,9 @@ static unsigned long long cyc_to_sched_clock(u32 cyc, u32 mask)
u64 epoch_ns;
u32 epoch_cyc;
if (cd.suspended)
return cd.epoch_ns;
/*
* Load the epoch_cyc and epoch_ns atomically. We do this by
* ensuring that we always write epoch_cyc, epoch_ns and
@@ -98,6 +103,13 @@ static void sched_clock_poll(unsigned long wrap_ticks)
update_sched_clock();
}
void __init setup_sched_clock_needs_suspend(u32 (*read)(void), int bits,
unsigned long rate)
{
setup_sched_clock(read, bits, rate);
cd.needs_suspend = true;
}
void __init setup_sched_clock(u32 (*read)(void), int bits, unsigned long rate)
{
unsigned long r, w;
@@ -169,11 +181,23 @@ void __init sched_clock_postinit(void)
static int sched_clock_suspend(void)
{
sched_clock_poll(sched_clock_timer.data);
if (cd.needs_suspend)
cd.suspended = true;
return 0;
}
static void sched_clock_resume(void)
{
if (cd.needs_suspend) {
cd.epoch_cyc = read_sched_clock();
cd.epoch_cyc_copy = cd.epoch_cyc;
cd.suspended = false;
}
}
static struct syscore_ops sched_clock_ops = {
.suspend = sched_clock_suspend,
.resume = sched_clock_resume,
};
static int __init sched_clock_syscore_init(void)


@@ -268,6 +268,19 @@ static int cpu_has_aliasing_icache(unsigned int arch)
int aliasing_icache;
unsigned int id_reg, num_sets, line_size;
#ifdef CONFIG_BL_SWITCHER
/*
* We expect a combination of Cortex-A15 and Cortex-A7 cores.
* A7 = VIPT aliasing I-cache
* A15 = PIPT (non-aliasing) I-cache
* To cater for this discrepancy, let's assume aliasing I-cache
* all the time. This means unneeded extra work on the A15 but
* only ptrace is affected which is not performance critical.
*/
if ((read_cpuid_id() & 0xff0ffff0) == 0x410fc0f0)
return 1;
#endif
/* PIPT caches never alias. */
if (icache_is_pipt())
return 0;


@@ -642,7 +642,7 @@ static void do_signal(struct pt_regs *regs, int syscall)
}
}
if (try_to_freeze_nowarn())
goto no_signal;
/*


@@ -51,11 +51,13 @@
struct secondary_data secondary_data;
enum ipi_msg_type {
IPI_PING = 1,
IPI_TIMER,
IPI_RESCHEDULE,
IPI_CALL_FUNC,
IPI_CALL_FUNC_SINGLE,
IPI_CPU_STOP,
IPI_CPU_BACKTRACE,
};
static DECLARE_COMPLETION(cpu_running);
@@ -242,6 +244,20 @@ static void __cpuinit smp_store_cpu_info(unsigned int cpuid)
static void percpu_timer_setup(void);
/*
* Skip the secondary calibration on architectures sharing clock
* with primary cpu. Archs can use ARCH_SKIP_SECONDARY_CALIBRATE
* for this.
*/
static inline int skip_secondary_calibrate(void)
{
#ifdef CONFIG_ARCH_SKIP_SECONDARY_CALIBRATE
return 0;
#else
return -ENXIO;
#endif
}
/*
* This is the secondary CPU boot entry. We're using this CPUs
* idle thread stack, but a set of temporary page tables.
@@ -275,7 +291,8 @@ asmlinkage void __cpuinit secondary_start_kernel(void)
notify_cpu_starting(cpu);
if (skip_secondary_calibrate())
calibrate_delay();
smp_store_cpu_info(cpu);
@@ -366,6 +383,11 @@ void __init set_smp_cross_call(void (*fn)(const struct cpumask *, unsigned int))
smp_cross_call = fn;
}
void arm_send_ping_ipi(int cpu)
{
smp_cross_call(cpumask_of(cpu), IPI_PING);
}
void arch_send_call_function_ipi_mask(const struct cpumask *mask)
{
smp_cross_call(mask, IPI_CALL_FUNC);
@@ -383,6 +405,7 @@ static const char *ipi_types[NR_IPI] = {
S(IPI_CALL_FUNC, "Function call interrupts"),
S(IPI_CALL_FUNC_SINGLE, "Single function call interrupts"),
S(IPI_CPU_STOP, "CPU stop interrupts"),
S(IPI_CPU_BACKTRACE, "CPU backtrace"),
};
void show_ipi_list(struct seq_file *p, int prec)
@@ -514,6 +537,58 @@ static void ipi_cpu_stop(unsigned int cpu)
cpu_relax();
}
static cpumask_t backtrace_mask;
static DEFINE_RAW_SPINLOCK(backtrace_lock);
/* "in progress" flag of arch_trigger_all_cpu_backtrace */
static unsigned long backtrace_flag;
void smp_send_all_cpu_backtrace(void)
{
unsigned int this_cpu = smp_processor_id();
int i;
if (test_and_set_bit(0, &backtrace_flag))
/*
* If there is already a trigger_all_cpu_backtrace() in progress
* (backtrace_flag == 1), don't output double cpu dump infos.
*/
return;
cpumask_copy(&backtrace_mask, cpu_online_mask);
cpu_clear(this_cpu, backtrace_mask);
pr_info("Backtrace for cpu %d (current):\n", this_cpu);
dump_stack();
pr_info("\nsending IPI to all other CPUs:\n");
smp_cross_call(&backtrace_mask, IPI_CPU_BACKTRACE);
/* Wait for up to 10 seconds for all other CPUs to do the backtrace */
for (i = 0; i < 10 * 1000; i++) {
if (cpumask_empty(&backtrace_mask))
break;
mdelay(1);
}
clear_bit(0, &backtrace_flag);
smp_mb__after_clear_bit();
}
/*
* ipi_cpu_backtrace - handle IPI from smp_send_all_cpu_backtrace()
*/
static void ipi_cpu_backtrace(unsigned int cpu, struct pt_regs *regs)
{
if (cpu_isset(cpu, backtrace_mask)) {
raw_spin_lock(&backtrace_lock);
pr_warning("IPI backtrace for cpu %d\n", cpu);
show_regs(regs);
raw_spin_unlock(&backtrace_lock);
cpu_clear(cpu, backtrace_mask);
}
}
/*
* Main handler for inter-processor interrupts
*/
@@ -531,6 +606,9 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
__inc_irq_stat(cpu, ipi_irqs[ipinr - IPI_TIMER]);
switch (ipinr) {
case IPI_PING:
break;
case IPI_TIMER:
irq_enter();
ipi_timer();
@@ -559,6 +637,10 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
irq_exit();
break;
case IPI_CPU_BACKTRACE:
ipi_cpu_backtrace(cpu, regs);
break;
default:
printk(KERN_CRIT "CPU%u: Unknown IPI message 0x%x\n",
cpu, ipinr);
@@ -610,3 +692,16 @@ int setup_profiling_timer(unsigned int multiplier)
{
return -EINVAL;
}
static void flush_all_cpu_cache(void *info)
{
flush_dcache_level(flush_cache_level_cpu());
}
void flush_all_cpu_caches(void)
{
preempt_disable();
smp_call_function(flush_all_cpu_cache, NULL, 1);
flush_cache_all();
preempt_enable();
}


@@ -26,7 +26,18 @@ void __cpu_suspend_save(u32 *ptr, u32 ptrsz, u32 sp, u32 *save_ptr)
cpu_do_suspend(ptr);
flush_dcache_level(flush_cache_level_cpu());
/*
* flush_dcache_level does not guarantee that
* save_ptr and ptr are cleaned to main memory,
* just up to the required cache level.
* Since the context pointer and context itself
* are to be retrieved with the MMU off that
* data must be cleaned from all cache levels
* to main memory using "area" cache primitives.
*/
__cpuc_flush_dcache_area(phys_to_virt(*save_ptr), ptrsz);
__cpuc_flush_dcache_area(save_ptr, sizeof(*save_ptr));
outer_clean_range(*save_ptr, *save_ptr + ptrsz);
outer_clean_range(virt_to_phys(save_ptr),
virt_to_phys(save_ptr) + sizeof(*save_ptr));


@@ -11,20 +11,33 @@ if ARCH_EXYNOS
menu "SAMSUNG EXYNOS SoCs Support"
choice
prompt "EXYNOS System Type"
default ARCH_EXYNOS5
config ARCH_EXYNOS4
bool "SAMSUNG EXYNOS4"
select HAVE_SMP
select MIGHT_HAVE_CACHE_L2X0
select ARCH_SPARSEMEM_ENABLE
select ARCH_HAS_HOLES_MEMORYMODEL
select ARM_ERRATA_761320 if SMP
select ARM_ERRATA_764369
help
Samsung EXYNOS4 SoCs based systems
config ARCH_EXYNOS5
bool "SAMSUNG EXYNOS5"
select HAVE_SMP
select ARCH_NEEDS_CPU_IDLE_COUPLED
select HAVE_EXYNOS5_HSI2C if I2C
select ARM_ERRATA_773022
select ARM_ERRATA_774769
help
Samsung EXYNOS5 (Cortex-A15) SoC based systems
endchoice
comment "EXYNOS SoCs"
config CPU_EXYNOS4210
@@ -46,6 +59,9 @@ config SOC_EXYNOS4212
select SAMSUNG_DMADEV
select S5P_PM if PM
select S5P_SLEEP if PM
select ARCH_HAS_OPP
select PM_OPP if PM
select PM_GENERIC_DOMAINS if PM_RUNTIME
help
Enable EXYNOS4212 SoC support
@@ -54,6 +70,9 @@ config SOC_EXYNOS4412
default y
depends on ARCH_EXYNOS4
select SAMSUNG_DMADEV
select ARCH_HAS_OPP
select PM_OPP if PM
select PM_GENERIC_DOMAINS if PM_RUNTIME
help
Enable EXYNOS4412 SoC support
@@ -61,16 +80,40 @@ config SOC_EXYNOS5250
bool "SAMSUNG EXYNOS5250"
default y
depends on ARCH_EXYNOS5
select SAMSUNG_DMADEV
select S5P_PM if PM
select S5P_SLEEP if PM
select PM_GENERIC_DOMAINS if PM_RUNTIME
select ARM_ERRATA_766421
help
Enable EXYNOS5250 SoC support
config EXYNOS_CONTENT_PATH_PROTECTION
bool "Exynos Content Path Protection"
depends on (ARM_TRUSTZONE && ARCH_EXYNOS5)
default n
help
Enable content path protection of EXYNOS.
config SOC_EXYNOS5410
bool "SAMSUNG EXYNOS5410"
default y
depends on ARCH_EXYNOS5
select SAMSUNG_DMADEV
select S5P_PM if PM
select S5P_SLEEP if PM
select PM_GENERIC_DOMAINS if PM_RUNTIME
help
Enable EXYNOS5410 SoC support
config EXYNOS4_MCT
bool
default y
select HAVE_SCHED_CLOCK
help
Use MCT (Multi Core Timer) as kernel timers
config EXYNOS_DEV_DMA
bool
help
Compile in amba device definitions for DMA controller
@@ -85,21 +128,96 @@ config EXYNOS4_SETUP_FIMD0
help
Common setup code for FIMD0.
config EXYNOS_SETUP_FIMD1
bool
help
Common setup code for FIMD1.
config EXYNOS_SETUP_ADC
bool
help
Common setup code for ADC.
config EXYNOS_SETUP_DP
bool
depends on S5P_DP
default y
help
Common setup code for DP.
config EXYNOS_DEV_SYSMMU
bool
help
Common setup code for SYSTEM MMU in EXYNOS
config EXYNOS_DEV_DWMCI
bool
help
Compile in platform device definitions for DWMCI
config EXYNOS4_DEV_FIMC_LITE
bool
depends on VIDEO_EXYNOS_FIMC_LITE
default y
help
Compile in platform device definitions for FIMC_LITE
config EXYNOS4_DEV_FIMC_IS
bool
depends on (VIDEO_EXYNOS4_FIMC_IS)
default y
help
Compile in platform device definition for FIMC-IS
config EXYNOS5_DEV_FIMC_IS
bool
depends on (VIDEO_EXYNOS5_FIMC_IS)
default y
help
Compile in platform device definition for FIMC-IS
config EXYNOS_DEV_ROTATOR
bool
help
Compile in platform device definitions for EXYNOS ROTATOR
NOTE: EXYNOS4 is not supported yet, it will be implemented.
config EXYNOS4_DEV_USB_OHCI
bool
help
Compile in platform device definition for USB OHCI
config EXYNOS5_DEV_USB3_DRD
bool
help
Compile in platform device definition for EXYNOS5 SuperSpeed USB 3.0
DRD controller.
config EXYNOS_DEV_USB_SWITCH
bool
help
Compile in platform device definitions for USB-SWITCH
config EXYNOS5_DEV_HSI2C0
bool
help
Compile in platform device definitions for HS-I2C channel 0
config EXYNOS5_DEV_HSI2C1
bool
help
Compile in platform device definitions for HS-I2C channel 1
config EXYNOS5_DEV_HSI2C2
bool
help
Compile in platform device definitions for HS-I2C channel 2
config EXYNOS5_DEV_HSI2C3
bool
help
Compile in platform device definitions for HS-I2C channel 3
config EXYNOS4_SETUP_I2C1
bool
help
@@ -135,11 +253,36 @@ config EXYNOS4_SETUP_I2C7
help
Common setup code for i2c bus 7.
config EXYNOS5_SETUP_HSI2C0
bool
help
Common setup code for hs-i2c bus 0.
config EXYNOS5_SETUP_HSI2C1
bool
help
Common setup code for hs-i2c bus 1.
config EXYNOS5_SETUP_HSI2C2
bool
help
Common setup code for hs-i2c bus 2.
config EXYNOS5_SETUP_HSI2C3
bool
help
Common setup code for hs-i2c bus 3.
config EXYNOS4_SETUP_KEYPAD
bool
help
Common setup code for keypad.
config EXYNOS4_SETUP_MFC
bool
help
Common setup code for MFC.
config EXYNOS4_SETUP_SDHCI
bool
select EXYNOS4_SETUP_SDHCI_GPIO
@@ -161,11 +304,101 @@ config EXYNOS4_SETUP_USB_PHY
help
Common setup code for USB PHY controller
config EXYNOS4_SETUP_FIMC_IS
bool
depends on (VIDEO_EXYNOS4_FIMC_IS)
default y
help
Common setup code for the FIMC-IS-MC
config EXYNOS5_SETUP_FIMC_IS
bool
depends on (VIDEO_EXYNOS5_FIMC_IS)
default y
help
Common setup code for the FIMC-IS-MC
config EXYNOS_SETUP_SPI
bool
help
Common setup code for SPI GPIO configurations.
config EXYNOS_FIQ_DEBUGGER
bool "Exynos FIQ debugger support"
depends on FIQ_DEBUGGER
default y
help
Exynos platform support for the FIQ debugger
config EXYNOS5_CORESIGHT
bool "EXYNOS5 embedded trace support"
depends on ARCH_EXYNOS5
select OC_ETM
help
Enable embedded trace support
config EXYNOS_PERSISTENT_CLOCK
bool
depends on !RTC_DRV_S3C
default n
help
Persistent-clock-only driver for EXYNOS RTC.
config EXYNOS_DEV_TMU
bool
help
Compile in platform device definitions for TMU
config EXYNOS_TMU
bool "Use thermal management"
depends on CPU_FREQ
help
Common setup code for TMU
config EXYNOS5_DEV_BTS
bool
depends on ARCH_EXYNOS5
select S5P_DEV_BTS
help
Compile in platform device definitions for BTS devices
config EXYNOS5_CCI
bool "Cache Coherent Interconnect support"
depends on SOC_EXYNOS5410 && ARM_EXYNOS_IKS_CORE
default y
help
Enable Cache Coherent Interconnect support
config EXYNOS5410_BTS
bool "Bus traffic shaper support"
default y
depends on SOC_EXYNOS5410
help
Enable BTS (Bus traffic shaper) support
config EXYNOS5_CLUSTER_POWER_CONTROL
bool "Dynamic cluster power control support"
depends on SOC_EXYNOS5410
default y
help
Enable dynamic cluster power control support.
If A15 cluster power is off, T32 cannot attach
to both A7 and A15 cores in the system.
config EXYNOS5410_DEBUG
bool "ARM Debug architecture v7.1 support"
depends on SOC_EXYNOS5410
default y
help
Enable preserve debug logic state.
config EXYNOS5_DYNAMIC_CPU_HOTPLUG
bool "Dynamic CPU Hotplug support"
depends on SOC_EXYNOS5410
default y
help
Enable Dynamic CPU Hotplug
# machine support
if ARCH_EXYNOS4
@@ -202,10 +435,9 @@ config MACH_SMDKV310
select SAMSUNG_DEV_BACKLIGHT
select EXYNOS4_DEV_AHCI
select SAMSUNG_DEV_KEYPAD
select EXYNOS4_DEV_DMA
select SAMSUNG_DEV_PWM
select EXYNOS_DEV_DMA
select EXYNOS4_DEV_USB_OHCI
select EXYNOS4_DEV_SYSMMU
select EXYNOS4_SETUP_FIMD0
select EXYNOS4_SETUP_I2C1
select EXYNOS4_SETUP_KEYPAD
@@ -222,9 +454,8 @@ config MACH_ARMLEX4210
select S3C_DEV_HSMMC
select S3C_DEV_HSMMC2
select S3C_DEV_HSMMC3
select EXYNOS_DEV_DMA
select EXYNOS4_DEV_AHCI
select EXYNOS4_DEV_DMA
select EXYNOS4_DEV_SYSMMU
select EXYNOS4_SETUP_SDHCI
help
Machine support for Samsung ARMLEX4210 based on EXYNOS4210
@@ -254,7 +485,7 @@ config MACH_UNIVERSAL_C210
select S5P_DEV_MFC
select S5P_DEV_ONENAND
select S5P_DEV_TV
select EXYNOS4_DEV_DMA
select EXYNOS_DEV_DMA
select EXYNOS4_SETUP_FIMD0
select EXYNOS4_SETUP_I2C1
select EXYNOS4_SETUP_I2C3
@@ -290,7 +521,7 @@ config MACH_NURI
select S5P_DEV_MFC
select S5P_DEV_USB_EHCI
select S5P_SETUP_MIPIPHY
select EXYNOS4_DEV_DMA
select EXYNOS_DEV_DMA
select EXYNOS4_SETUP_FIMC
select EXYNOS4_SETUP_FIMD0
select EXYNOS4_SETUP_I2C1
@@ -325,7 +556,7 @@ config MACH_ORIGEN
select S5P_DEV_USB_EHCI
select SAMSUNG_DEV_BACKLIGHT
select SAMSUNG_DEV_PWM
select EXYNOS4_DEV_DMA
select EXYNOS_DEV_DMA
select EXYNOS4_DEV_USB_OHCI
select EXYNOS4_SETUP_FIMD0
select EXYNOS4_SETUP_SDHCI
@@ -337,23 +568,47 @@ comment "EXYNOS4212 Boards"
config MACH_SMDK4212
bool "SMDK4212"
select SOC_BUS
select SOC_EXYNOS4212
select S3C_DEV_HSMMC2
select S3C_DEV_HSMMC3
select S3C_DEV_HWMON if S3C_ADC
select S3C_DEV_I2C1
select S3C_DEV_I2C3
select S3C_DEV_I2C4
select S3C_DEV_I2C5
select S3C_DEV_I2C7
select S3C_DEV_RTC
select S3C_DEV_WDT
select S5P_DEV_FIMC0
select S5P_DEV_FIMC1
select S5P_DEV_FIMC2
select S5P_DEV_FIMC3
select S5P_DEV_CSIS0
select S5P_DEV_CSIS1
select S5P_DEV_FLITE0
select S5P_DEV_FLITE1
select S5P_GPIO_INT
select S5P_DEV_FIMD0
select S5P_DEV_MFC
select SAMSUNG_DEV_ADC
select SAMSUNG_DEV_BACKLIGHT
select SAMSUNG_DEV_KEYPAD
select SAMSUNG_DEV_PWM
select EXYNOS4_DEV_DMA
select EXYNOS_DEV_DMA
select EXYNOS_DEV_DWMCI
select EXYNOS_DEV_SYSMMU
select EXYNOS4_SETUP_I2C1
select EXYNOS4_SETUP_I2C3
select EXYNOS4_SETUP_I2C4
select EXYNOS4_SETUP_I2C5
select EXYNOS4_SETUP_I2C7
select EXYNOS4_SETUP_KEYPAD
select EXYNOS4_SETUP_SDHCI
select EXYNOS4_SETUP_FIMD0
select EXYNOS4_SETUP_MFC
help
Machine support for Samsung SMDK4212
@@ -363,10 +618,173 @@ config MACH_SMDK4412
bool "SMDK4412"
select SOC_EXYNOS4412
select MACH_SMDK4212
select S3C_DEV_USB_HSOTG
select S5P_DEV_USB_EHCI
select EXYNOS4_DEV_USB_OHCI
select EXYNOS4_SETUP_USB_PHY
help
Machine support for Samsung SMDK4412
endif
if ARCH_EXYNOS5
comment "EXYNOS5250 Boards"
config MACH_SMDK5250
bool "SMDK5250"
select SOC_EXYNOS5250
select S3C_DEV_I2C1
select S3C_DEV_I2C2
select S3C_DEV_I2C4
select S3C_DEV_I2C5
select S3C_DEV_I2C7
select S3C_DEV_RTC
select S3C_DEV_WDT
select S5P_DEV_MFC
select S5P_DEV_DP
select S5P_DEV_FIMD1
select S5P_DEV_FIMG2D
select S5P_DEV_TV
select S5P_DEV_I2C_HDMIPHY
select S5P_DEV_USB_EHCI
select S5P_GPIO_INT
select EXYNOS_DEV_DMA
select EXYNOS_DEV_SYSMMU
select EXYNOS_DEV_DWMCI
select EXYNOS_DEV_SS_UDC
select EXYNOS_DEV_DWC3
select EXYNOS_SETUP_ADC
select EXYNOS_SETUP_DP
select EXYNOS_SETUP_FIMD1
select EXYNOS_DEV_ROTATOR
select EXYNOS_DEV_TMU
select EXYNOS4_DEV_FIMC_IS
select EXYNOS4_DEV_USB_OHCI
select EXYNOS4_SETUP_I2C1
select EXYNOS4_SETUP_I2C2
select EXYNOS4_SETUP_I2C4
select EXYNOS4_SETUP_I2C5
select EXYNOS4_SETUP_I2C7
select EXYNOS4_SETUP_MFC
select EXYNOS4_SETUP_USB_PHY
select EXYNOS4_SETUP_FIMC_IS
select SAMSUNG_DEV_ADC
select SAMSUNG_DEV_BACKLIGHT
select SAMSUNG_DEV_PWM
select S3C64XX_DEV_SPI0
select S3C64XX_DEV_SPI1
select S3C64XX_DEV_SPI2
select EXYNOS_SETUP_SPI
select EXYNOS5_DEV_BTS
help
Machine support for Samsung SMDK5250
comment "EXYNOS5410 Boards"
config MACH_SMDK5410
bool "SMDK5410"
select SOC_EXYNOS5410
select S3C_DEV_RTC
select S3C_DEV_WDT
select S3C_DEV_I2C1
select S3C_DEV_I2C2
select S3C_DEV_I2C3
select S5P_GPIO_INT
select S5P_DEV_TV
select S5P_DEV_FIMD1
select S5P_DEV_USB_EHCI
select S5P_DEV_CSIS0
select S5P_DEV_CSIS1
select S5P_DEV_CSIS2
select S5P_DEV_MFC
select SAMSUNG_DEV_ADC
select S5P_DEV_FIMG2D
select EXYNOS_DEV_DWMCI
select EXYNOS_DEV_DMA
select EXYNOS_DEV_ROTATOR
select SAMSUNG_DEV_BACKLIGHT
select SAMSUNG_DEV_PWM
select EXYNOS_DEV_SYSMMU
select EXYNOS_DEV_TMU
select EXYNOS_SETUP_FIMD1
select EXYNOS_DEV_USB_SWITCH
select EXYNOS4_SETUP_I2C1
select EXYNOS4_SETUP_I2C2
select EXYNOS4_SETUP_I2C3
select EXYNOS4_SETUP_USB_PHY
select EXYNOS4_SETUP_MFC
select EXYNOS4_DEV_USB_OHCI
select EXYNOS5_DEV_HSI2C0
select EXYNOS5_DEV_HSI2C1
select EXYNOS5_DEV_HSI2C2
select EXYNOS5_DEV_HSI2C3
select EXYNOS5_DEV_SCALER
select EXYNOS5_DEV_GSC
select EXYNOS5_DEV_USB3_DRD
select EXYNOS5_SETUP_HSI2C0
select EXYNOS5_SETUP_HSI2C1
select EXYNOS5_SETUP_HSI2C2
select EXYNOS5_SETUP_HSI2C3
select S3C64XX_DEV_SPI0
select S3C64XX_DEV_SPI1
select S3C64XX_DEV_SPI2
select S3C64XX_DEV_SPI3
select EXYNOS_SETUP_SPI
select EXYNOS5_DEV_FIMC_IS
select EXYNOS5_SETUP_FIMC_IS
help
Machine support for Samsung SMDK5410
comment "ODROID EXYNOS5 Boards"
config MACH_ODROIDXU
bool "ODROIDXU"
select SOC_EXYNOS5410
select S3C_DEV_RTC
select S3C_DEV_WDT
select S3C_DEV_I2C1
select S3C_DEV_I2C2
select S5P_GPIO_INT
select S5P_DEV_TV
select S5P_DEV_FIMD1
select S5P_DEV_USB_EHCI
select S5P_DEV_CSIS0
select S5P_DEV_CSIS1
select S5P_DEV_CSIS2
select S5P_DEV_MFC
select SAMSUNG_DEV_ADC
select S5P_DEV_FIMG2D
select EXYNOS_DEV_DWMCI
select EXYNOS_DEV_DMA
select EXYNOS_DEV_ROTATOR
select SAMSUNG_DEV_BACKLIGHT
select SAMSUNG_DEV_PWM
select EXYNOS_DEV_SYSMMU
select EXYNOS_DEV_TMU
select EXYNOS_SETUP_FIMD1
select EXYNOS4_SETUP_I2C1
select EXYNOS4_SETUP_I2C2
select EXYNOS4_SETUP_USB_PHY
select EXYNOS4_SETUP_MFC
select EXYNOS4_DEV_USB_OHCI
select EXYNOS5_DEV_HSI2C0
select EXYNOS5_DEV_HSI2C1
select EXYNOS5_DEV_SCALER
select EXYNOS5_DEV_GSC
select EXYNOS5_DEV_USB3_DRD
select EXYNOS5_SETUP_HSI2C0
select EXYNOS5_SETUP_HSI2C1
select S3C64XX_DEV_SPI1
select EXYNOS_SETUP_SPI
help
Machine support for Hardkernel ODROIDXU
endif
comment "Flattened Device Tree based board for EXYNOS SoCs"
config MACH_EXYNOS4_DT
@@ -385,15 +803,69 @@ config MACH_EXYNOS4_DT
config MACH_EXYNOS5_DT
bool "SAMSUNG EXYNOS5 Machine using device tree"
depends on ARCH_EXYNOS5
select SOC_EXYNOS5250
select SOC_EXYNOS5410
select USE_OF
select ARM_AMBA
help
Machine support for Samsung EXYNOS5 machines with device tree enabled.
Select this if an FDT blob is available for the EXYNOS5 SoC based board.
config EXYNOS5_DEV_GSC
bool
help
Compile in platform device definitions for GSC
config EXYNOS5_DEV_SCALER
bool
help
Compile in platform device definition for SCALER
config EXYNOS5_DEV_JPEG
bool
depends on VIDEO_EXYNOS_JPEG
default y
help
Compile in platform device definitions for JPEG
config EXYNOS5_SETUP_JPEG
bool
depends on VIDEO_EXYNOS_JPEG
default y
help
Common setup code for JPEG
config EXYNOS5_DEV_JPEG_HX
bool
depends on VIDEO_EXYNOS_JPEG
default y
help
Compile in platform device definitions for JPEG HX
config EXYNOS5_SETUP_JPEG_HX
bool
depends on VIDEO_EXYNOS_JPEG
default y
help
Common setup code for JPEG HX
config EXYNOS4_SETUP_CSIS
bool
depends on VIDEO_FIMC_MIPI
default y
help
Common setup code for MIPI-CSIS
config EXYNOS5_SETUP_TVOUT
bool
default y
help
Common setup code for TVOUT
if ARCH_EXYNOS4
menu "MMC/SD slot setup"
depends on PLAT_S5P
comment "Configuration for HSMMC 8-bit bus width"
config EXYNOS4_SDHCI_CH0_8BIT
@@ -407,8 +879,47 @@ config EXYNOS4_SDHCI_CH2_8BIT
help
Support HSMMC Channel 2 8-bit bus.
If selected, Channel 3 is disabled.
endmenu
endif
comment "Configuration for Memory base address"
config EXYNOS_MEM_BASE
hex "Memory base address"
default 0x40000000
help
Memory base address for Exynos series.
endmenu
endif
if ARCH_EXYNOS5
menu "SD/MMC/SDIO Support"
config EXYNOS_EMMC_HS200
bool "eMMC HS200 Mode support"
default n
help
Enable HS200 mode for eMMC devices
endmenu
menu "SD/MMC Clock Source Select"
choice
prompt "SDMMC Clock Source"
default SDMMC_CLOCK_CPLL
config SDMMC_CLOCK_CPLL
bool "SDMMC Base Clock CPLL"
help
Select the 640 MHz CPLL as the SDMMC base clock
config SDMMC_CLOCK_EPLL
bool "SDMMC Base Clock EPLL"
help
Select the 400 MHz EPLL as the SDMMC base clock
endchoice
endmenu
endif


@@ -13,16 +13,29 @@ obj- :=
# Core
obj-$(CONFIG_ARCH_EXYNOS) += common.o
obj-$(CONFIG_ARCH_EXYNOS4) += clock-exynos4.o
obj-$(CONFIG_ARCH_EXYNOS5) += clock-exynos5.o
obj-$(CONFIG_ARCH_EXYNOS4) += clock-exynos4.o asv.o asv-4x12.o
obj-$(CONFIG_ARM_TRUSTZONE) += irq-sgi.o
obj-$(CONFIG_CPU_EXYNOS4210) += clock-exynos4210.o
obj-$(CONFIG_SOC_EXYNOS4212) += clock-exynos4212.o
obj-$(CONFIG_SOC_EXYNOS5250) += clock-exynos5250.o
obj-$(CONFIG_SOC_EXYNOS5410) += clock-exynos5410.o asv-exynos.o asv-exynos5410.o cci.o exynos-power-mode.o
obj-$(CONFIG_SOC_EXYNOS5410) += exynos-interface.o
obj-$(CONFIG_EXYNOS5_DEV_BTS) += dev-bts.o
obj-$(CONFIG_EXYNOS5410_BTS) += bts-exynos5410.o
obj-$(CONFIG_PM) += pm.o
obj-$(CONFIG_PM_GENERIC_DOMAINS) += pm_domains.o
obj-$(CONFIG_CPU_IDLE) += cpuidle.o
obj-$(CONFIG_ARCH_EXYNOS4) += pmu.o
ifeq ($(CONFIG_SOC_EXYNOS5250),y)
obj-$(CONFIG_CPU_IDLE) += cpuidle-exynos5250.o
else
obj-$(CONFIG_CPU_IDLE) += cpuidle.o
endif
obj-$(CONFIG_SOC_EXYNOS5250) += ori-asv-exynos.o ori-abb-exynos.o ori-asv-exynos5250.o
obj-$(CONFIG_ARCH_EXYNOS) += pmu.o
obj-$(CONFIG_SMP) += platsmp.o headsmp.o
@@ -30,6 +43,21 @@ obj-$(CONFIG_EXYNOS4_MCT) += mct.o
obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o
obj-$(CONFIG_ARCH_EXYNOS) += clock-audss.o
obj-$(CONFIG_EXYNOS_FIQ_DEBUGGER) += exynos_fiq_debugger.o
obj-$(CONFIG_EXYNOS_BUSFREQ_OPP) += ppmu.o busfreq_opp_exynos4.o busfreq_opp_4x12.o
obj-$(CONFIG_EXYNOS5_CORESIGHT) += coresight-exynos5.o
obj-$(CONFIG_EXYNOS_PERSISTENT_CLOCK) += persistent_clock.o
obj-$(CONFIG_EXYNOS5410_DEBUG) += debug_exynos5410.o
obj-$(CONFIG_ARM_TRUSTZONE) += smc.o
plus_sec := $(call as-instr,.arch_extension sec,+sec)
AFLAGS_smc.o :=-Wa,-march=armv7-a$(plus_sec)
# machine support
obj-$(CONFIG_MACH_SMDKC210) += mach-smdkv310.o
@@ -41,23 +69,74 @@ obj-$(CONFIG_MACH_ORIGEN) += mach-origen.o
obj-$(CONFIG_MACH_SMDK4212) += mach-smdk4x12.o
obj-$(CONFIG_MACH_SMDK4412) += mach-smdk4x12.o
obj-$(CONFIG_MACH_SMDK4412) += board-smdk4x12-mmc.o
obj-$(CONFIG_MACH_SMDK4412) += board-smdk4x12-audio.o
obj-$(CONFIG_MACH_SMDK4412) += board-smdk4x12-display.o
obj-$(CONFIG_MACH_SMDK4412) += board-smdk4x12-usb.o
obj-$(CONFIG_MACH_SMDK4412) += board-smdk4x12-media.o
obj-$(CONFIG_MACH_SMDK4412) += board-smdk4x12-power.o
obj-$(CONFIG_MACH_SMDK5410) += mach-smdk5410.o
obj-$(CONFIG_MACH_SMDK5410) += board-smdk5410-mmc.o
obj-$(CONFIG_MACH_SMDK5410) += board-smdk5410-power.o
obj-$(CONFIG_MACH_SMDK5410) += board-smdk5410-usb.o
obj-$(CONFIG_MACH_SMDK5410) += board-smdk5410-audio.o
obj-$(CONFIG_MACH_SMDK5410) += board-smdk5410-input.o
obj-$(CONFIG_MACH_SMDK5410) += board-smdk5410-clock.o
obj-$(CONFIG_MACH_SMDK5410) += board-smdk5410-media.o
obj-$(CONFIG_MACH_SMDK5410) += board-smdk5410-display.o
obj-$(CONFIG_MACH_ODROIDXU) += mach-odroid-xu.o
obj-$(CONFIG_MACH_ODROIDXU) += board-odroidxu-mmc.o
obj-$(CONFIG_MACH_ODROIDXU) += board-odroidxu-power.o
obj-$(CONFIG_MACH_ODROIDXU) += board-odroidxu-usb.o
obj-$(CONFIG_MACH_ODROIDXU) += board-odroidxu-audio.o
obj-$(CONFIG_MACH_ODROIDXU) += board-odroidxu-input.o
obj-$(CONFIG_MACH_ODROIDXU) += board-odroidxu-clock.o
obj-$(CONFIG_MACH_ODROIDXU) += board-odroidxu-media.o
obj-$(CONFIG_MACH_ODROIDXU) += board-odroidxu-display.o
obj-$(CONFIG_MACH_EXYNOS4_DT) += mach-exynos4-dt.o
obj-$(CONFIG_MACH_EXYNOS5_DT) += mach-exynos5-dt.o
obj-$(CONFIG_MACH_SMDK5250) += mach-smdk5250.o
# device support
obj-y += dev-uart.o
obj-$(CONFIG_ARCH_EXYNOS4) += dev-audio.o
obj-$(CONFIG_ARCH_EXYNOS) += dev-audio.o
obj-$(CONFIG_EXYNOS4_DEV_AHCI) += dev-ahci.o
obj-$(CONFIG_EXYNOS4_DEV_SYSMMU) += dev-sysmmu.o
obj-$(CONFIG_EXYNOS4_DEV_DWMCI) += dev-dwmci.o
obj-$(CONFIG_EXYNOS4_DEV_DMA) += dma.o
obj-$(CONFIG_EXYNOS_DEV_DWMCI) += dev-dwmci.o
obj-$(CONFIG_EXYNOS4_DEV_FIMC_IS) += dev-fimc-is.o
obj-$(CONFIG_EXYNOS5_DEV_FIMC_IS) += dev-fimc-is.o
obj-$(CONFIG_EXYNOS4_DEV_FIMC_LITE) += dev-fimc-lite.o
obj-$(CONFIG_EXYNOS5_DEV_GSC) += dev-gsc.o setup-gsc.o
obj-$(CONFIG_EXYNOS5_DEV_SCALER) += dev-scaler.o
obj-$(CONFIG_EXYNOS_DEV_ROTATOR) += dev-rotator.o
obj-$(CONFIG_EXYNOS_DEV_SYSMMU) += dev-sysmmu.o
obj-$(CONFIG_EXYNOS_DEV_DMA) += dma.o
obj-$(CONFIG_EXYNOS_DEV_USB_SWITCH) += dev-usb-switch.o
obj-$(CONFIG_EXYNOS4_DEV_USB_OHCI) += dev-ohci.o
obj-$(CONFIG_EXYNOS5_DEV_USB3_DRD) += dev-usb3-drd.o
obj-$(CONFIG_EXYNOS5_DEV_JPEG) += dev-jpeg.o
obj-$(CONFIG_EXYNOS5_DEV_JPEG_HX) += dev-jpeg-hx.o
obj-$(CONFIG_EXYNOS_DEV_TMU) += dev-tmu.o
obj-$(CONFIG_EXYNOS5_DEV_HSI2C0) += dev-hs-i2c0.o
obj-$(CONFIG_EXYNOS5_DEV_HSI2C1) += dev-hs-i2c1.o
obj-$(CONFIG_EXYNOS5_DEV_HSI2C2) += dev-hs-i2c2.o
obj-$(CONFIG_EXYNOS5_DEV_HSI2C3) += dev-hs-i2c3.o
obj-$(CONFIG_ARCH_EXYNOS) += setup-i2c0.o
obj-$(CONFIG_EXYNOS4_SETUP_FIMC) += setup-fimc.o
obj-$(CONFIG_EXYNOS4_SETUP_CSIS) += setup-csis.o
obj-$(CONFIG_EXYNOS4_SETUP_FIMD0) += setup-fimd0.o
obj-$(CONFIG_EXYNOS_SETUP_FIMD1) += setup-fimd1.o
obj-$(CONFIG_EXYNOS_SETUP_DP) += setup-dp.o
obj-$(CONFIG_FB_MIPI_DSIM) += setup-mipidsim.o
obj-$(CONFIG_EXYNOS_CONTENT_PATH_PROTECTION) += secmem.o
obj-$(CONFIG_EXYNOS4_SETUP_FIMC_IS) += setup-fimc-is.o
obj-$(CONFIG_EXYNOS5_SETUP_FIMC_IS) += setup-fimc-is.o
obj-$(CONFIG_VISION_MODE) += setup-fimc-is-sensor.o
obj-$(CONFIG_EXYNOS4_SETUP_I2C1) += setup-i2c1.o
obj-$(CONFIG_EXYNOS4_SETUP_I2C2) += setup-i2c2.o
obj-$(CONFIG_EXYNOS4_SETUP_I2C3) += setup-i2c3.o
@@ -65,7 +144,28 @@ obj-$(CONFIG_EXYNOS4_SETUP_I2C4) += setup-i2c4.o
obj-$(CONFIG_EXYNOS4_SETUP_I2C5) += setup-i2c5.o
obj-$(CONFIG_EXYNOS4_SETUP_I2C6) += setup-i2c6.o
obj-$(CONFIG_EXYNOS4_SETUP_I2C7) += setup-i2c7.o
obj-$(CONFIG_EXYNOS5_SETUP_HSI2C0) += setup-hs-i2c0.o
obj-$(CONFIG_EXYNOS5_SETUP_HSI2C1) += setup-hs-i2c1.o
obj-$(CONFIG_EXYNOS5_SETUP_HSI2C2) += setup-hs-i2c2.o
obj-$(CONFIG_EXYNOS5_SETUP_HSI2C3) += setup-hs-i2c3.o
obj-$(CONFIG_EXYNOS4_SETUP_KEYPAD) += setup-keypad.o
obj-$(CONFIG_EXYNOS4_SETUP_MFC) += setup-mfc.o
obj-$(CONFIG_EXYNOS4_SETUP_SDHCI_GPIO) += setup-sdhci-gpio.o
obj-$(CONFIG_EXYNOS4_SETUP_USB_PHY) += setup-usb-phy.o
obj-$(CONFIG_EXYNOS4_SETUP_SPI) += setup-spi.o
ifeq ($(CONFIG_MACH_ODROIDXU),y)
obj-$(CONFIG_EXYNOS4_SETUP_USB_PHY) += board-odroidxu-setup-usb.o
else
obj-$(CONFIG_EXYNOS4_SETUP_USB_PHY) += setup-usb-phy.o
endif
obj-$(CONFIG_EXYNOS_SETUP_SPI) += setup-spi.o
obj-$(CONFIG_EXYNOS5_SETUP_TVOUT) += setup-tvout.o
obj-$(CONFIG_EXYNOS_SETUP_ADC) += setup-adc.o
obj-$(CONFIG_ION_EXYNOS) += dev-ion.o
obj-$(CONFIG_CMA) += reserve-mem.o
obj-$(CONFIG_EXYNOS5_SETUP_JPEG) += setup-jpeg.o
obj-$(CONFIG_EXYNOS5_SETUP_JPEG_HX) += setup-jpeg-hx.o
obj-$(CONFIG_ARCH_EXYNOS5) += resetreason.o
obj-$(CONFIG_EXYNOS_TMU) += tmu-exynos.o
obj-$(CONFIG_BL_SWITCHER) += bL_control.o bL_setup.o


@@ -1,2 +1,10 @@
zreladdr-y += 0x40008000
params_phys-y := 0x40000100
__ZRELADDR := $(shell /bin/bash -c 'printf "0x%08x" \
$$[$(CONFIG_EXYNOS_MEM_BASE) + 0x8000]')
__PARAMS_PHYS := $(shell /bin/bash -c 'printf "0x%08x" \
$$[$(CONFIG_EXYNOS_MEM_BASE) + 0x100]')
zreladdr-y += $(__ZRELADDR)
params_phys-y := $(__PARAMS_PHYS)
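The two `$(shell ...)` calls above simply add fixed offsets to the configured memory base. As a quick sanity check of that arithmetic (a hypothetical stand-alone snippet, assuming the default CONFIG_EXYNOS_MEM_BASE of 0x40000000; `$((...))` here is the portable equivalent of the `$[...]` bash form used in the Makefile):

```shell
# zreladdr: memory base + 0x8000 (kernel load offset)
printf '0x%08x\n' $((0x40000000 + 0x8000))   # reproduces the old 0x40008000
# params_phys: memory base + 0x100 (ATAG/params offset)
printf '0x%08x\n' $((0x40000000 + 0x100))    # reproduces the old 0x40000100
```

This matches the previously hard-coded `zreladdr-y` and `params_phys-y` values, so boards with a different memory base only need to change CONFIG_EXYNOS_MEM_BASE.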
dtb-$(CONFIG_MACH_EXYNOS5_DT) += exynos5410-smdk5410.dtb


@@ -0,0 +1,295 @@
/* linux/arch/arm/mach-exynos/asv-4x12.c
*
* Copyright (c) 2012 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* EXYNOS4X12 - ASV(Adaptive Supply Voltage) driver
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/init.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/clk.h>
#include <linux/io.h>
#include <mach/asv.h>
#include <mach/map.h>
#include <plat/cpu.h>
/* ASV function for Fused Chip */
#define IDS_ARM_OFFSET 24
#define IDS_ARM_MASK 0xFF
#define HPM_OFFSET 12
#define HPM_MASK 0x1F
#define FUSED_SG_OFFSET 3
#define ORIG_SG_OFFSET 17
#define ORIG_SG_MASK 0xF
#define MOD_SG_OFFSET 21
#define MOD_SG_MASK 0x7
#define LOCKING_OFFSET 7
#define LOCKING_MASK 0x1F
#define EMA_OFFSET 6
#define EMA_MASK 0x1
#define DEFAULT_ASV_GROUP 1
#define CHIP_ID_REG (S5P_VA_CHIPID + 0x4)
struct asv_judge_table exynos4x12_limit[] = {
/* HPM, IDS */
{ 0, 0}, /* Reserved Group */
{ 0, 0}, /* Reserved Group */
{ 14, 9},
{ 16, 14},
{ 18, 17},
{ 20, 20},
{ 21, 24},
{ 22, 30},
{ 23, 34},
{ 24, 39},
{100, 100},
{999, 999}, /* Reserved Group */
};
struct asv_judge_table exynos4x12_prime_limit[] = {
/* HPM, IDS */
{ 0, 0}, /* Reserved Group */
{ 15, 8},
{ 16, 11},
{ 18, 14},
{ 19, 18},
{ 20, 22},
{ 21, 26},
{ 22, 29},
{ 23, 36},
{ 24, 40},
{ 25, 45},
{ 26, 50},
{999, 999}, /* Reserved Group */
};
struct asv_judge_table exynos4212_limit[] = {
/* HPM, IDS */
{ 0, 0}, /* Reserved Group */
{ 17, 12},
{ 18, 13},
{ 20, 14},
{ 22, 18},
{ 24, 22},
{ 25, 29},
{ 26, 31},
{ 27, 35},
{ 28, 39},
{100, 100},
{999, 999}, /* Reserved Group */
};
static int exynos4x12_get_hpm(struct samsung_asv *asv_info)
{
asv_info->hpm_result = (asv_info->pkg_id >> HPM_OFFSET) & HPM_MASK;
return 0;
}
static int exynos4x12_get_ids(struct samsung_asv *asv_info)
{
asv_info->ids_result = (asv_info->pkg_id >> IDS_ARM_OFFSET) & IDS_ARM_MASK;
return 0;
}
static void exynos4x12_pre_set_abb(void)
{
switch (exynos_result_of_asv) {
case 0:
case 1:
case 2:
case 3:
exynos4x12_set_abb(ABB_MODE_100V);
break;
default:
exynos4x12_set_abb(ABB_MODE_130V);
break;
}
}
static void exynos4x12_prime_pre_set_abb(void)
{
/* ABB setting for ARM */
switch (exynos_result_of_asv) {
case 0:
case 1:
exynos4x12_set_abb_member(ABB_ARM, ABB_MODE_070V);
break;
case 2:
exynos4x12_set_abb_member(ABB_ARM, ABB_MODE_100V);
break;
default:
exynos4x12_set_abb_member(ABB_ARM, ABB_MODE_130V);
break;
}
/* ABB setting for INT */
switch (exynos_result_of_asv) {
case 0:
case 1:
case 2:
exynos4x12_set_abb_member(ABB_INT, ABB_MODE_100V);
break;
default:
exynos4x12_set_abb_member(ABB_INT, ABB_MODE_130V);
break;
}
/* ABB setting for MIF */
switch (exynos_result_of_asv) {
case 0:
case 1:
exynos4x12_set_abb_member(ABB_MIF, ABB_MODE_100V);
break;
default:
exynos4x12_set_abb_member(ABB_MIF, ABB_MODE_140V);
break;
}
/* ABB setting for G3D */
switch (exynos_result_of_asv) {
case 0:
case 1:
case 2:
case 3:
case 4:
case 5:
case 6:
case 7:
exynos4x12_set_abb_member(ABB_G3D, ABB_MODE_100V);
break;
default:
exynos4x12_set_abb_member(ABB_G3D, ABB_MODE_130V);
break;
}
}
static int exynos4x12_asv_store_result(struct samsung_asv *asv_info)
{
unsigned int i;
if (soc_is_exynos4412()) {
if (samsung_rev() >= EXYNOS4412_REV_2_0) {
for (i = 0; i < ARRAY_SIZE(exynos4x12_prime_limit); i++) {
if ((asv_info->ids_result <= exynos4x12_prime_limit[i].ids_limit) ||
(asv_info->hpm_result <= exynos4x12_prime_limit[i].hpm_limit)) {
exynos_result_of_asv = i;
break;
}
}
} else {
for (i = 0; i < ARRAY_SIZE(exynos4x12_limit); i++) {
if ((asv_info->ids_result <= exynos4x12_limit[i].ids_limit) ||
(asv_info->hpm_result <= exynos4x12_limit[i].hpm_limit)) {
exynos_result_of_asv = i;
break;
}
}
}
} else {
for (i = 0; i < ARRAY_SIZE(exynos4212_limit); i++) {
if ((asv_info->ids_result <= exynos4212_limit[i].ids_limit) ||
(asv_info->hpm_result <= exynos4212_limit[i].hpm_limit)) {
exynos_result_of_asv = i;
break;
}
}
}
/*
* If the ASV result value is lower than the default value,
* fix it up to the default value.
*/
if (exynos_result_of_asv < DEFAULT_ASV_GROUP)
exynos_result_of_asv = DEFAULT_ASV_GROUP;
pr_info("EXYNOS4X12(NO SG): IDS : %d HPM : %d RESULT : %d\n",
asv_info->ids_result, asv_info->hpm_result, exynos_result_of_asv);
if (samsung_rev() >= EXYNOS4412_REV_2_0)
exynos4x12_prime_pre_set_abb();
else
exynos4x12_pre_set_abb();
return 0;
}
int exynos4x12_asv_init(struct samsung_asv *asv_info)
{
unsigned int tmp;
unsigned int exynos_orig_sp;
unsigned int exynos_mod_sp;
int exynos_cal_asv;
exynos_result_of_asv = 0;
pr_info("EXYNOS4X12: Adaptive Supply Voltage init\n");
tmp = __raw_readl(CHIP_ID_REG);
/* Store PKG_ID */
asv_info->pkg_id = tmp;
if ((tmp >> EMA_OFFSET) & EMA_MASK)
exynos_dynamic_ema = true;
/* If the speed group is fused, get the speed group from the CHIP_ID register */
if ((tmp >> FUSED_SG_OFFSET) & 0x1) {
exynos_orig_sp = (tmp >> ORIG_SG_OFFSET) & ORIG_SG_MASK;
exynos_mod_sp = (tmp >> MOD_SG_OFFSET) & MOD_SG_MASK;
exynos_cal_asv = exynos_orig_sp - exynos_mod_sp;
/*
* If there is no original speed group,
* store the default ASV group (1) in exynos_result_of_asv.
*/
if (!exynos_orig_sp) {
pr_info("EXYNOS4X12: No Origin speed Group\n");
exynos_result_of_asv = DEFAULT_ASV_GROUP;
} else {
if (exynos_cal_asv < DEFAULT_ASV_GROUP)
exynos_result_of_asv = DEFAULT_ASV_GROUP;
else
exynos_result_of_asv = exynos_cal_asv;
}
pr_info("EXYNOS4X12(SG): ORIG : %d MOD : %d RESULT : %d\n",
exynos_orig_sp, exynos_mod_sp, exynos_result_of_asv);
/* Set special flag into exynos_special_flag */
exynos_special_flag = (tmp >> LOCKING_OFFSET) & LOCKING_MASK;
if (samsung_rev() >= EXYNOS4412_REV_2_0)
exynos4x12_prime_pre_set_abb();
else
exynos4x12_pre_set_abb();
return -EEXIST;
}
/* Set special flag into exynos_special_flag */
exynos_special_flag = (tmp >> LOCKING_OFFSET) & LOCKING_MASK;
asv_info->get_ids = exynos4x12_get_ids;
asv_info->get_hpm = exynos4x12_get_hpm;
asv_info->store_result = exynos4x12_asv_store_result;
return 0;
}


@@ -0,0 +1,181 @@
/* linux/arch/arm/mach-exynos/asv-exynos.c
*
* Copyright (c) 2012 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* EXYNOS5 - ASV (Adaptive Supply Voltage) driver
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/init.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <plat/cpu.h>
#include <mach/map.h>
#include <mach/asv-exynos.h>
static LIST_HEAD(asv_list);
static DEFINE_MUTEX(asv_mutex);
void add_asv_member(struct asv_info *exynos_asv_info)
{
mutex_lock(&asv_mutex);
list_add_tail(&exynos_asv_info->node, &asv_list);
mutex_unlock(&asv_mutex);
}
struct asv_info *asv_get(enum asv_type_id exynos_asv_type_id)
{
struct asv_info *match_asv_info;
list_for_each_entry(match_asv_info, &asv_list, node)
if (exynos_asv_type_id == match_asv_info->asv_type)
return match_asv_info;
return NULL;
}
unsigned int get_match_volt(enum asv_type_id target_type, unsigned int target_freq)
{
struct asv_info *match_asv_info = asv_get(target_type);
unsigned int target_dvfs_level;
unsigned int i;
if (!match_asv_info) {
pr_info("EXYNOS ASV: failed to get_match_volt(type: %d)\n", target_type);
return 0;
}
target_dvfs_level = match_asv_info->dvfs_level_nr;
for (i = 0; i < target_dvfs_level; i++) {
if (match_asv_info->asv_volt[i].asv_freq == target_freq)
return match_asv_info->asv_volt[i].asv_value;
}
/* If there is no matched freq, return max supplied voltage */
return match_asv_info->max_volt_value;
}
unsigned int get_match_abb(enum asv_type_id target_type, unsigned int target_freq)
{
struct asv_info *match_asv_info = asv_get(target_type);
unsigned int target_dvfs_level;
unsigned int i;
if (!match_asv_info) {
pr_info("EXYNOS ASV: failed to get_match_abb(type: %d)\n", target_type);
return 0;
}
target_dvfs_level = match_asv_info->dvfs_level_nr;
if (!match_asv_info->asv_abb) {
pr_info("EXYNOS ASV: request for nonexistent ASV type (type: %d)\n", target_type);
return 0;
}
for (i = 0; i < target_dvfs_level; i++) {
if (match_asv_info->asv_abb[i].asv_freq == target_freq)
return match_asv_info->asv_abb[i].asv_value;
}
/* If there is no matched freq, return default BB value */
return ABB_X100;
}
unsigned int set_match_abb(enum asv_type_id target_type, unsigned int target_abb)
{
struct asv_info *match_asv_info = asv_get(target_type);
if (!match_asv_info) {
pr_info("EXYNOS ASV: failed to set_match_abb(type: %d)\n", target_type);
return 0;
}
if (!match_asv_info->abb_info) {
pr_info("EXYNOS ASV: request for nonexistent ABB (type: %d)\n", target_type);
return 0;
}
match_asv_info->abb_info->target_abb = target_abb;
match_asv_info->abb_info->set_target_abb(match_asv_info);
return 0;
}
static void set_asv_info(struct asv_common *exynos_asv_common, bool show_volt)
{
struct asv_info *exynos_asv_info;
unsigned int match_grp_nr;
list_for_each_entry(exynos_asv_info, &asv_list, node) {
match_grp_nr = exynos_asv_info->ops->get_asv_group(exynos_asv_common);
exynos_asv_info->result_asv_grp = match_grp_nr;
pr_info("%s ASV group is %d\n", exynos_asv_info->name,
exynos_asv_info->result_asv_grp);
exynos_asv_info->ops->set_asv_info(exynos_asv_info, show_volt);
/* If ABB needs to be set, call the ABB set function */
if (exynos_asv_info->abb_info)
exynos_asv_info->abb_info->set_target_abb(exynos_asv_info);
}
}
static int __init asv_init(void)
{
struct asv_common *exynos_asv_common;
int ret;
exynos_asv_common = kzalloc(sizeof(struct asv_common), GFP_KERNEL);
if (!exynos_asv_common) {
pr_err("ASV : Allocation failed\n");
goto out1;
}
/* Call the init function for each SoC type */
if (soc_is_exynos5410())
ret = exynos5410_init_asv(exynos_asv_common);
else {
pr_err("ASV : Unknown SoC type\n");
goto out2;
}
if (ret) {
pr_err("ASV : asv initialize failed\n");
goto out2;
}
/* If initialization is needed, run the init function */
if (exynos_asv_common->init) {
if (exynos_asv_common->init()) {
pr_err("ASV : Cannot run init function\n");
goto out2;
}
}
/* Register the ASV members for each SoC */
if (exynos_asv_common->regist_asv_member) {
ret = exynos_asv_common->regist_asv_member();
if (ret)
goto out2;
} else {
pr_err("ASV : There is no regist_asv_member function\n");
goto out2;
}
set_asv_info(exynos_asv_common, false);
kfree(exynos_asv_common);
return 0;
out2:
kfree(exynos_asv_common);
out1:
return -EINVAL;
}
arch_initcall_sync(asv_init);


@@ -0,0 +1,676 @@
/* linux/arch/arm/mach-exynos/asv-exynos5410.c
*
* Copyright (c) 2012 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* EXYNOS5410 - ASV (Adaptive Supply Voltage) driver
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/init.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/clk.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <mach/asv-exynos.h>
#include <mach/asv-exynos5410.h>
#include <mach/map.h>
#include <mach/regs-pmu.h>
#include <plat/cpu.h>
#define CHIP_ID3_REG (S5P_VA_CHIPID + 0x04)
#define EXYNOS5410_IDS_OFFSET (24)
#define EXYNOS5410_IDS_MASK (0xFF)
#define EXYNOS5410_USESG_OFFSET (3)
#define EXYNOS5410_USESG_MASK (0x01)
#define EXYNOS5410_SG_OFFSET (0)
#define EXYNOS5410_SG_MASK (0x07)
#define EXYNOS5410_TABLE_OFFSET (8)
#define EXYNOS5410_TABLE_MASK (0x03)
#define EXYNOS5410_SG_A_OFFSET (17)
#define EXYNOS5410_SG_A_MASK (0x0F)
#define EXYNOS5410_SG_B_OFFSET (21)
#define EXYNOS5410_SG_B_MASK (0x03)
#define EXYNOS5410_SG_BSIGN_OFFSET (23)
#define EXYNOS5410_SG_BSIGN_MASK (0x01)
#define CHIP_ID4_REG (S5P_VA_CHIPID + 0x1C)
#define EXYNOS5410_TMCB_OFFSET (0)
#define EXYNOS5410_TMCB_MASK (0x7F)
#define EXYNOS5410_EGLLOCK_UP_OFFSET (8)
#define EXYNOS5410_EGLLOCK_UP_MASK (0x03)
#define EXYNOS5410_EGLLOCK_DN_OFFSET (10)
#define EXYNOS5410_EGLLOCK_DN_MASK (0x03)
#define EXYNOS5410_KFCLOCK_UP_OFFSET (12)
#define EXYNOS5410_KFCLOCK_UP_MASK (0x03)
#define EXYNOS5410_KFCLOCK_DN_OFFSET (14)
#define EXYNOS5410_KFCLOCK_DN_MASK (0x03)
#define EXYNOS5410_INTLOCK_UP_OFFSET (16)
#define EXYNOS5410_INTLOCK_UP_MASK (0x03)
#define EXYNOS5410_INTLOCK_DN_OFFSET (18)
#define EXYNOS5410_INTLOCK_DN_MASK (0x03)
#define EXYNOS5410_MIFLOCK_UP_OFFSET (20)
#define EXYNOS5410_MIFLOCK_UP_MASK (0x03)
#define EXYNOS5410_MIFLOCK_DN_OFFSET (22)
#define EXYNOS5410_MIFLOCK_DN_MASK (0x03)
#define EXYNOS5410_G3DLOCK_UP_OFFSET (24)
#define EXYNOS5410_G3DLOCK_UP_MASK (0x03)
#define EXYNOS5410_G3DLOCK_DN_OFFSET (26)
#define EXYNOS5410_G3DLOCK_DN_MASK (0x03)
/* The following values are used with a x10000 scale factor */
#define EXYNOS5410_TMCB_CHIPER 10000
#define EXYNOS5410_MUL_VAL 9225
#define EXYNOS5410_MINUS_VAL 145520
#define LOT_ID_REG (S5P_VA_CHIPID + 0x14)
#define LOT_ID_LEN (5)
#define BASE_VOLTAGE_OFFSET 1000000
enum table_version {
ASV_TABLE_VER0,
ASV_TABLE_VER1,
ASV_TABLE_VER2,
ASV_TABLE_VER3,
};
enum volt_offset {
VOLT_OFFSET_0MV,
VOLT_OFFSET_25MV,
VOLT_OFFSET_50MV,
VOLT_OFFSET_75MV,
};
bool is_special_lot;
bool is_speedgroup;
unsigned special_lot_group;
enum table_version asv_table_version;
enum volt_offset asv_volt_offset[5][2];
static const char *special_lot_list[] = {
"NZXK8",
"NZXKR",
"NZXT6",
};
unsigned int exynos5410_add_volt_offset(unsigned int voltage, enum volt_offset offset)
{
switch (offset) {
case VOLT_OFFSET_0MV:
break;
case VOLT_OFFSET_25MV:
voltage += 25000;
break;
case VOLT_OFFSET_50MV:
voltage += 50000;
break;
case VOLT_OFFSET_75MV:
voltage += 75000;
break;
}
return voltage;
}
unsigned int exynos5410_apply_volt_offset(unsigned int voltage, enum asv_type_id target_type)
{
if (!is_speedgroup)
return voltage;
if (voltage > BASE_VOLTAGE_OFFSET)
voltage = exynos5410_add_volt_offset(voltage, asv_volt_offset[target_type][0]);
else
voltage = exynos5410_add_volt_offset(voltage, asv_volt_offset[target_type][1]);
return voltage;
}
void exynos5410_set_abb(struct asv_info *asv_inform)
{
void __iomem *target_reg;
unsigned int target_value;
switch (asv_inform->asv_type) {
case ID_ARM:
case ID_KFC:
target_reg = EXYNOS5410_BB_CON0;
target_value = arm_asv_abb_info[asv_inform->result_asv_grp];
break;
case ID_INT_MIF_L0:
case ID_INT_MIF_L1:
case ID_INT_MIF_L2:
case ID_INT_MIF_L3:
case ID_MIF:
target_reg = EXYNOS5410_BB_CON1;
target_value = int_asv_abb_info[asv_inform->result_asv_grp];
break;
default:
return;
}
set_abb(target_reg, target_value);
}
static unsigned int exynos5410_get_asv_group_arm(struct asv_common *asv_comm)
{
unsigned int i;
struct asv_info *target_asv_info = asv_get(ID_ARM);
/* If sample is from special lot, must apply ASV group 0 */
if (is_special_lot)
return special_lot_group;
for (i = 0; i < target_asv_info->asv_group_nr; i++) {
if (refer_use_table_get_asv[0][i] &&
asv_comm->ids_value <= refer_table_get_asv[0][i])
return i;
if (refer_use_table_get_asv[1][i] &&
asv_comm->hpm_value <= refer_table_get_asv[1][i])
return i;
}
return 0;
}
static void exynos5410_set_asv_info_arm(struct asv_info *asv_inform, bool show_value)
{
unsigned int i;
unsigned int target_asv_grp_nr = asv_inform->result_asv_grp;
exynos5410_set_abb(asv_inform);
asv_inform->asv_volt = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
asv_inform->asv_abb = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
for (i = 0; i < asv_inform->dvfs_level_nr; i++) {
asv_inform->asv_volt[i].asv_freq = arm_asv_volt_info[i][0];
asv_inform->asv_volt[i].asv_value =
exynos5410_apply_volt_offset(arm_asv_volt_info[i][target_asv_grp_nr + 1], ID_ARM);
}
if (show_value) {
/*
* asv_abb is allocated above but its entries are not filled in
* here, so only the frequency/voltage pair is printed.
*/
for (i = 0; i < asv_inform->dvfs_level_nr; i++)
pr_info("%s LV%d freq : %d volt : %d\n",
asv_inform->name, i,
asv_inform->asv_volt[i].asv_freq,
asv_inform->asv_volt[i].asv_value);
}
}
struct asv_ops exynos5410_asv_ops_arm = {
.get_asv_group = exynos5410_get_asv_group_arm,
.set_asv_info = exynos5410_set_asv_info_arm,
};
static unsigned int exynos5410_get_asv_group_kfc(struct asv_common *asv_comm)
{
unsigned int i;
struct asv_info *target_asv_info = asv_get(ID_KFC);
/* If sample is from special lot, must apply ASV group 0 */
if (is_special_lot)
return special_lot_group;
for (i = 0; i < target_asv_info->asv_group_nr; i++) {
if (refer_use_table_get_asv[0][i] &&
asv_comm->ids_value <= refer_table_get_asv[0][i])
return i;
if (refer_use_table_get_asv[1][i] &&
asv_comm->hpm_value <= refer_table_get_asv[1][i])
return i;
}
return 0;
}
static void exynos5410_set_asv_info_kfc(struct asv_info *asv_inform, bool show_value)
{
unsigned int i;
unsigned int target_asv_grp_nr = asv_inform->result_asv_grp;
asv_inform->asv_volt = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
asv_inform->asv_abb = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
for (i = 0; i < asv_inform->dvfs_level_nr; i++) {
asv_inform->asv_volt[i].asv_freq = kfc_asv_volt_info[i][0];
asv_inform->asv_volt[i].asv_value =
exynos5410_apply_volt_offset(kfc_asv_volt_info[i][target_asv_grp_nr + 1], ID_KFC);
}
if (show_value) {
for (i = 0; i < asv_inform->dvfs_level_nr; i++)
pr_info("%s LV%d freq : %d volt : %d\n",
asv_inform->name, i,
asv_inform->asv_volt[i].asv_freq,
asv_inform->asv_volt[i].asv_value);
}
}
struct asv_ops exynos5410_asv_ops_kfc = {
.get_asv_group = exynos5410_get_asv_group_kfc,
.set_asv_info = exynos5410_set_asv_info_kfc,
};
static unsigned int exynos5410_get_asv_group_int(struct asv_common *asv_comm)
{
unsigned int i;
struct asv_info *target_asv_info = asv_get(ID_INT_MIF_L0);
/* If sample is from special lot, must apply ASV group 0 */
if (is_special_lot)
return special_lot_group;
for (i = 0; i < target_asv_info->asv_group_nr; i++) {
if (refer_use_table_get_asv[0][i] &&
asv_comm->ids_value <= refer_table_get_asv[0][i])
return i;
if (refer_use_table_get_asv[1][i] &&
asv_comm->hpm_value <= refer_table_get_asv[1][i])
return i;
}
return 0;
}
static void exynos5410_set_asv_info_int_mif_lv0(struct asv_info *asv_inform, bool show_value)
{
unsigned int i;
unsigned int target_asv_grp_nr = asv_inform->result_asv_grp;
asv_inform->asv_volt = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
asv_inform->asv_abb = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
for (i = 0; i < asv_inform->dvfs_level_nr; i++) {
asv_inform->asv_volt[i].asv_freq = int_mif_lv0_asv_volt_info[i][0];
asv_inform->asv_volt[i].asv_value =
exynos5410_apply_volt_offset(int_mif_lv0_asv_volt_info[i][target_asv_grp_nr + 1], ID_INT);
}
if (show_value) {
for (i = 0; i < asv_inform->dvfs_level_nr; i++)
pr_info("%s LV%d freq : %d volt : %d\n",
asv_inform->name, i,
asv_inform->asv_volt[i].asv_freq,
asv_inform->asv_volt[i].asv_value);
}
}
struct asv_ops exynos5410_asv_ops_int_mif_lv0 = {
.get_asv_group = exynos5410_get_asv_group_int,
.set_asv_info = exynos5410_set_asv_info_int_mif_lv0,
};
static void exynos5410_set_asv_info_int_mif_lvl(struct asv_info *asv_inform, bool show_value)
{
unsigned int i;
unsigned int target_asv_grp_nr = asv_inform->result_asv_grp;
asv_inform->asv_volt = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
asv_inform->asv_abb = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
for (i = 0; i < asv_inform->dvfs_level_nr; i++) {
asv_inform->asv_volt[i].asv_freq = int_mif_lv1_asv_volt_info[i][0];
asv_inform->asv_volt[i].asv_value =
exynos5410_apply_volt_offset(int_mif_lv1_asv_volt_info[i][target_asv_grp_nr + 1], ID_INT);
}
if (show_value) {
for (i = 0; i < asv_inform->dvfs_level_nr; i++)
pr_info("%s LV%d freq : %d volt : %d\n",
asv_inform->name, i,
asv_inform->asv_volt[i].asv_freq,
asv_inform->asv_volt[i].asv_value);
}
}
struct asv_ops exynos5410_asv_ops_int_mif_lv1 = {
.get_asv_group = exynos5410_get_asv_group_int,
.set_asv_info = exynos5410_set_asv_info_int_mif_lvl,
};
static void exynos5410_set_asv_info_int_mif_lv2(struct asv_info *asv_inform, bool show_value)
{
unsigned int i;
unsigned int target_asv_grp_nr = asv_inform->result_asv_grp;
asv_inform->asv_volt = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
asv_inform->asv_abb = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
for (i = 0; i < asv_inform->dvfs_level_nr; i++) {
asv_inform->asv_volt[i].asv_freq = int_mif_lv2_asv_volt_info[i][0];
asv_inform->asv_volt[i].asv_value =
exynos5410_apply_volt_offset(int_mif_lv2_asv_volt_info[i][target_asv_grp_nr + 1], ID_INT);
}
if (show_value) {
for (i = 0; i < asv_inform->dvfs_level_nr; i++)
pr_info("%s LV%d freq : %d volt : %d\n",
asv_inform->name, i,
asv_inform->asv_volt[i].asv_freq,
asv_inform->asv_volt[i].asv_value);
}
}
struct asv_ops exynos5410_asv_ops_int_mif_lv2 = {
.get_asv_group = exynos5410_get_asv_group_int,
.set_asv_info = exynos5410_set_asv_info_int_mif_lv2,
};
static void exynos5410_set_asv_info_int_mif_lv3(struct asv_info *asv_inform, bool show_value)
{
unsigned int i;
unsigned int target_asv_grp_nr = asv_inform->result_asv_grp;
asv_inform->asv_volt = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
asv_inform->asv_abb = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
for (i = 0; i < asv_inform->dvfs_level_nr; i++) {
asv_inform->asv_volt[i].asv_freq = int_mif_lv3_asv_volt_info[i][0];
asv_inform->asv_volt[i].asv_value =
exynos5410_apply_volt_offset(int_mif_lv3_asv_volt_info[i][target_asv_grp_nr + 1], ID_INT);
}
if (show_value) {
for (i = 0; i < asv_inform->dvfs_level_nr; i++)
pr_info("%s LV%d freq : %d volt : %d\n",
asv_inform->name, i,
asv_inform->asv_volt[i].asv_freq,
asv_inform->asv_volt[i].asv_value);
}
}
struct asv_ops exynos5410_asv_ops_int_mif_lv3 = {
.get_asv_group = exynos5410_get_asv_group_int,
.set_asv_info = exynos5410_set_asv_info_int_mif_lv3,
};
static unsigned int exynos5410_get_asv_group_mif(struct asv_common *asv_comm)
{
unsigned int i;
struct asv_info *target_asv_info = asv_get(ID_MIF);
/* If sample is from special lot, must apply ASV group 0 */
if (is_special_lot)
return special_lot_group;
for (i = 0; i < target_asv_info->asv_group_nr; i++) {
if (refer_use_table_get_asv[0][i] &&
asv_comm->ids_value <= refer_table_get_asv[0][i])
return i;
if (refer_use_table_get_asv[1][i] &&
asv_comm->hpm_value <= refer_table_get_asv[1][i])
return i;
}
return 0;
}
static void exynos5410_set_asv_info_mif(struct asv_info *asv_inform, bool show_value)
{
unsigned int i;
unsigned int target_asv_grp_nr = asv_inform->result_asv_grp;
exynos5410_set_abb(asv_inform);
asv_inform->asv_volt = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
asv_inform->asv_abb = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
for (i = 0; i < asv_inform->dvfs_level_nr; i++) {
asv_inform->asv_volt[i].asv_freq = mif_asv_volt_info[i][0];
asv_inform->asv_volt[i].asv_value =
exynos5410_apply_volt_offset(mif_asv_volt_info[i][target_asv_grp_nr + 1], ID_MIF);
}
if (show_value) {
for (i = 0; i < asv_inform->dvfs_level_nr; i++)
pr_info("%s LV%d freq : %d volt : %d\n",
asv_inform->name, i,
asv_inform->asv_volt[i].asv_freq,
asv_inform->asv_volt[i].asv_value);
}
}
struct asv_ops exynos5410_asv_ops_mif = {
.get_asv_group = exynos5410_get_asv_group_mif,
.set_asv_info = exynos5410_set_asv_info_mif,
};
static unsigned int exynos5410_get_asv_group_g3d(struct asv_common *asv_comm)
{
unsigned int i;
struct asv_info *target_asv_info = asv_get(ID_G3D);
/* Samples from a special lot must use the special-lot ASV group */
if (is_special_lot)
return special_lot_group;
for (i = 0; i < target_asv_info->asv_group_nr; i++) {
if (refer_use_table_get_asv[0][i] &&
asv_comm->ids_value <= refer_table_get_asv[0][i])
return i;
if (refer_use_table_get_asv[1][i] &&
asv_comm->hpm_value <= refer_table_get_asv[1][i])
return i;
}
return 0;
}
static void exynos5410_set_asv_info_g3d(struct asv_info *asv_inform, bool show_value)
{
unsigned int i;
unsigned int target_asv_grp_nr = asv_inform->result_asv_grp;
asv_inform->asv_volt = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
asv_inform->asv_abb = kmalloc((sizeof(struct asv_freq_table) * asv_inform->dvfs_level_nr), GFP_KERNEL);
for (i = 0; i < asv_inform->dvfs_level_nr; i++) {
asv_inform->asv_volt[i].asv_freq = g3d_asv_volt_info[i][0];
asv_inform->asv_volt[i].asv_value =
exynos5410_apply_volt_offset(g3d_asv_volt_info[i][target_asv_grp_nr + 1], ID_G3D);
}
if (show_value) {
for (i = 0; i < asv_inform->dvfs_level_nr; i++)
pr_info("%s LV%d freq : %d volt : %d\n",
asv_inform->name, i,
asv_inform->asv_volt[i].asv_freq,
asv_inform->asv_volt[i].asv_value);
}
}
struct asv_ops exynos5410_asv_ops_g3d = {
.get_asv_group = exynos5410_get_asv_group_g3d,
.set_asv_info = exynos5410_set_asv_info_g3d,
};
struct asv_info exynos5410_asv_member[] = {
{
.asv_type = ID_ARM,
.name = "VDD_ARM",
.ops = &exynos5410_asv_ops_arm,
.asv_group_nr = ASV_GRP_NR(ARM),
.dvfs_level_nr = DVFS_LEVEL_NR(ARM),
.max_volt_value = MAX_VOLT(ARM),
}, {
.asv_type = ID_KFC,
.name = "VDD_KFC",
.ops = &exynos5410_asv_ops_kfc,
.asv_group_nr = ASV_GRP_NR(KFC),
.dvfs_level_nr = DVFS_LEVEL_NR(KFC),
.max_volt_value = MAX_VOLT(KFC),
}, {
.asv_type = ID_INT_MIF_L0,
.name = "VDD_INT_MIF_L0",
.ops = &exynos5410_asv_ops_int_mif_lv0,
.asv_group_nr = ASV_GRP_NR(INT),
.dvfs_level_nr = DVFS_LEVEL_NR(INT),
.max_volt_value = MAX_VOLT(INT),
}, {
.asv_type = ID_MIF,
.name = "VDD_MIF",
.ops = &exynos5410_asv_ops_mif,
.asv_group_nr = ASV_GRP_NR(MIF),
.dvfs_level_nr = DVFS_LEVEL_NR(MIF),
.max_volt_value = MAX_VOLT(MIF),
}, {
.asv_type = ID_G3D,
.name = "VDD_G3D",
.ops = &exynos5410_asv_ops_g3d,
.asv_group_nr = ASV_GRP_NR(G3D),
.dvfs_level_nr = DVFS_LEVEL_NR(G3D),
.max_volt_value = MAX_VOLT(G3D),
}, {
.asv_type = ID_INT_MIF_L1,
.name = "VDD_INT_MIF_L1",
.ops = &exynos5410_asv_ops_int_mif_lv1,
.asv_group_nr = ASV_GRP_NR(INT),
.dvfs_level_nr = DVFS_LEVEL_NR(INT),
.max_volt_value = MAX_VOLT(INT),
}, {
.asv_type = ID_INT_MIF_L2,
.name = "VDD_INT_MIF_L2",
.ops = &exynos5410_asv_ops_int_mif_lv2,
.asv_group_nr = ASV_GRP_NR(INT),
.dvfs_level_nr = DVFS_LEVEL_NR(INT),
.max_volt_value = MAX_VOLT(INT),
}, {
.asv_type = ID_INT_MIF_L3,
.name = "VDD_INT_MIF_L3",
.ops = &exynos5410_asv_ops_int_mif_lv3,
.asv_group_nr = ASV_GRP_NR(INT),
.dvfs_level_nr = DVFS_LEVEL_NR(INT),
.max_volt_value = MAX_VOLT(INT),
},
};
unsigned int exynos5410_regist_asv_member(void)
{
unsigned int i;
/* Register each ASV member into the list */
for (i = 0; i < ARRAY_SIZE(exynos5410_asv_member); i++)
add_asv_member(&exynos5410_asv_member[i]);
return 0;
}
static void exynos5410_check_lot_id(struct asv_common *asv_info)
{
unsigned int lid_reg = 0;
unsigned int rev_lid = 0;
unsigned int i;
unsigned int tmp;
lid_reg = __raw_readl(LOT_ID_REG);
for (i = 0; i < 32; i++) {
tmp = (lid_reg >> i) & 0x1;
rev_lid += tmp << (31 - i);
}
asv_info->lot_name[0] = 'N';
lid_reg = (rev_lid >> 11) & 0x1FFFFF;
for (i = 4; i >= 1; i--) {
tmp = lid_reg % 36;
lid_reg /= 36;
asv_info->lot_name[i] = (tmp < 10) ? (tmp + '0') : ((tmp - 10) + 'A');
}
for (i = 0; i < ARRAY_SIZE(special_lot_list); i++) {
if (!strncmp(asv_info->lot_name, special_lot_list[i], LOT_ID_LEN)) {
is_special_lot = true;
goto out;
}
}
is_special_lot = false;
out:
pr_info("Exynos5410 : Lot ID is %s[%s]\n", asv_info->lot_name,
(is_special_lot ? "Special" : "Non Special"));
}
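The lot-name decoding above can be checked outside the kernel: the 32-bit LOT_ID register is bit-reversed, bits [31:11] of the reversed word form a base-36 number, and its four low-order base-36 digits become characters 1..4 of the name (character 0 is always 'N'). A standalone sketch, fed with synthetic register values:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors exynos5410_check_lot_id()'s decode: bit-reverse, take the
 * 21-bit payload, then peel off four base-36 digits. */
static void decode_lot_name(uint32_t lid_reg, char name[6])
{
	uint32_t rev = 0;
	int i;

	for (i = 0; i < 32; i++)	/* bit-reverse the register */
		rev |= ((lid_reg >> i) & 0x1u) << (31 - i);

	name[0] = 'N';
	rev = (rev >> 11) & 0x1FFFFF;	/* 21-bit base-36 payload */
	for (i = 4; i >= 1; i--) {
		uint32_t d = rev % 36;

		rev /= 36;
		name[i] = (d < 10) ? ('0' + d) : ('A' + d - 10);
	}
	name[5] = '\0';
}
```

A register value of 0 decodes to "N0000"; a payload of 35 (digit 'Z' in the last position) corresponds to the bit-reversed register 0x188000 and decodes to "N000Z".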
int exynos5410_init_asv(struct asv_common *asv_info)
{
struct clk *clk_chipid;
unsigned int chip_id3_value;
unsigned int chip_id4_value;
special_lot_group = 0;
is_special_lot = false;
is_speedgroup = false;
/* Lot ID check */
clk_chipid = clk_get(NULL, "chipid_apbif");
if (IS_ERR(clk_chipid)) {
pr_info("EXYNOS5410 ASV : cannot find chipid clock!\n");
return -EINVAL;
}
clk_enable(clk_chipid);
chip_id3_value = __raw_readl(CHIP_ID3_REG);
chip_id4_value = __raw_readl(CHIP_ID4_REG);
exynos5410_check_lot_id(asv_info);
if (is_special_lot)
goto set_asv_info;
if ((chip_id3_value >> EXYNOS5410_USESG_OFFSET) & EXYNOS5410_USESG_MASK) {
if (!((chip_id3_value >> EXYNOS5410_SG_BSIGN_OFFSET) & EXYNOS5410_SG_BSIGN_MASK))
special_lot_group = ((chip_id3_value >> EXYNOS5410_SG_A_OFFSET) & EXYNOS5410_SG_A_MASK)
- ((chip_id3_value >> EXYNOS5410_SG_B_OFFSET) & EXYNOS5410_SG_B_MASK);
else
special_lot_group = ((chip_id3_value >> EXYNOS5410_SG_A_OFFSET) & EXYNOS5410_SG_A_MASK)
+ ((chip_id3_value >> EXYNOS5410_SG_B_OFFSET) & EXYNOS5410_SG_B_MASK);
is_speedgroup = true;
special_lot_group++;
pr_info("Exynos5410 ASV : Use Fusing Speed Group %d\n", special_lot_group);
} else {
asv_info->hpm_value = (chip_id4_value >> EXYNOS5410_TMCB_OFFSET) & EXYNOS5410_TMCB_MASK;
asv_info->ids_value = (chip_id3_value >> EXYNOS5410_IDS_OFFSET) & EXYNOS5410_IDS_MASK;
}
if (!asv_info->hpm_value) {
is_special_lot = true;
pr_info("Exynos5410 ASV : invalid HPM value\n");
}
pr_info("EXYNOS5410 ASV : %s IDS : %d HPM : %d\n", asv_info->lot_name,
asv_info->ids_value, asv_info->hpm_value);
asv_table_version = (chip_id3_value >> EXYNOS5410_TABLE_OFFSET) & EXYNOS5410_TABLE_MASK;
asv_volt_offset[ID_ARM][0] = (chip_id4_value >> EXYNOS5410_EGLLOCK_UP_OFFSET) & EXYNOS5410_EGLLOCK_UP_MASK;
asv_volt_offset[ID_ARM][1] = (chip_id4_value >> EXYNOS5410_EGLLOCK_DN_OFFSET) & EXYNOS5410_EGLLOCK_DN_MASK;
asv_volt_offset[ID_KFC][0] = (chip_id4_value >> EXYNOS5410_KFCLOCK_UP_OFFSET) & EXYNOS5410_KFCLOCK_UP_MASK;
asv_volt_offset[ID_KFC][1] = (chip_id4_value >> EXYNOS5410_KFCLOCK_DN_OFFSET) & EXYNOS5410_KFCLOCK_DN_MASK;
asv_volt_offset[ID_INT][0] = (chip_id4_value >> EXYNOS5410_INTLOCK_UP_OFFSET) & EXYNOS5410_INTLOCK_UP_MASK;
asv_volt_offset[ID_INT][1] = (chip_id4_value >> EXYNOS5410_INTLOCK_DN_OFFSET) & EXYNOS5410_INTLOCK_DN_MASK;
asv_volt_offset[ID_G3D][0] = (chip_id4_value >> EXYNOS5410_G3DLOCK_UP_OFFSET) & EXYNOS5410_G3DLOCK_UP_MASK;
asv_volt_offset[ID_G3D][1] = (chip_id4_value >> EXYNOS5410_G3DLOCK_DN_OFFSET) & EXYNOS5410_G3DLOCK_DN_MASK;
asv_volt_offset[ID_MIF][0] = (chip_id4_value >> EXYNOS5410_MIFLOCK_UP_OFFSET) & EXYNOS5410_MIFLOCK_UP_MASK;
asv_volt_offset[ID_MIF][1] = (chip_id4_value >> EXYNOS5410_MIFLOCK_DN_OFFSET) & EXYNOS5410_MIFLOCK_DN_MASK;
set_asv_info:
clk_disable(clk_chipid);
asv_info->regist_asv_member = exynos5410_regist_asv_member;
return 0;
}
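The fused speed-group arithmetic in exynos5410_init_asv() reduces to a small helper: a sign bit selects A-B versus A+B, and the result is biased by one. The parameters here are stand-ins for the SG_A, SG_B and SG_BSIGN fields extracted from CHIP_ID3:

```c
#include <assert.h>
#include <stdbool.h>

/* bsign clear => subtract B from A; bsign set => add them
 * (mirrors the !SG_BSIGN test in exynos5410_init_asv()).
 * The +1 bias keeps a fused part out of group 0. */
static unsigned int fused_speed_group(unsigned int a, unsigned int b,
				      bool bsign)
{
	unsigned int grp = bsign ? (a + b) : (a - b);

	return grp + 1;
}
```

For example, A=5, B=2 yields group 4 with the sign bit clear and group 8 with it set.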

106
arch/arm/mach-exynos/asv.c Normal file
View File

@@ -0,0 +1,106 @@
/* linux/arch/arm/mach-exynos/asv.c
*
* Copyright (c) 2011 Samsung Electronics Co., Ltd.
* http://www.samsung.com/
*
* EXYNOS4 - ASV (Adaptive Supply Voltage) driver
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/init.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <plat/cpu.h>
#include <mach/map.h>
#include <mach/regs-iem.h>
#include <mach/asv.h>
static struct samsung_asv *exynos_asv;
unsigned int exynos_result_of_asv;
unsigned int exynos_result_mif_asv;
unsigned int exynos_special_flag;
bool exynos_lot_id;
bool exynos_lot_is_nzvpu;
bool exynos_dynamic_ema;
static int __init exynos4_asv_init(void)
{
int ret = -EINVAL;
exynos_asv = kzalloc(sizeof(struct samsung_asv), GFP_KERNEL);
if (!exynos_asv)
goto out1;
if (soc_is_exynos4412() || soc_is_exynos4212()) {
ret = exynos4x12_asv_init(exynos_asv);
/*
* A non-zero return value means the ASV group has
* already been determined, so the remaining probing
* steps can be skipped.
*/
if (ret) {
kfree(exynos_asv);
return 0;
}
} else {
pr_info("EXYNOS: No ASV support for this SoC\n");
goto out2;
}
if (exynos_asv->check_vdd_arm) {
if (exynos_asv->check_vdd_arm()) {
pr_info("EXYNOS: Invalid vdd_arm\n");
goto out2;
}
}
/* Get HPM Delay value */
if (exynos_asv->get_hpm) {
if (exynos_asv->get_hpm(exynos_asv)) {
pr_info("EXYNOS: Fail to get HPM Value\n");
goto out2;
}
} else {
pr_info("EXYNOS: Fail to get HPM Value\n");
goto out2;
}
/* Get IDS ARM Value */
if (exynos_asv->get_ids) {
if (exynos_asv->get_ids(exynos_asv)) {
pr_info("EXYNOS: Fail to get IDS Value\n");
goto out2;
}
} else {
pr_info("EXYNOS: Fail to get IDS Value\n");
goto out2;
}
if (exynos_asv->store_result) {
if (exynos_asv->store_result(exynos_asv)) {
pr_info("EXYNOS: Failed to store ASV result\n");
goto out2;
}
} else {
pr_info("EXYNOS: No store_result function\n");
goto out2;
}
kfree(exynos_asv);
return 0;
out2:
kfree(exynos_asv);
out1:
return -EINVAL;
}
arch_initcall_sync(exynos4_asv_init);

Some files were not shown because too many files have changed in this diff