mirror of
https://github.com/hardkernel/linux.git
synced 2026-03-24 19:40:21 +09:00
Merge tag 'drm-next-2022-10-05' of git://anongit.freedesktop.org/drm/drm
Pull drm updates from Dave Airlie:
"Lots of stuff all over, some new AMD IP support and gang submit
support. i915 has further DG2 and Meteorlake pieces, and a bunch of
i915 display refactoring. msm has a shrinker rework. There are also a
bunch of conversions to use kunit.
This has two external pieces, some MEI changes needed for future Intel
discrete GPUs. These should be acked by Greg. There is also a cross
maintainer shared tree with some backlight rework from Hans in here.
Core:
- convert selftests to kunit
- managed init for more objects
- move to idr_init_base
- rename fb and gem cma helpers to dma
- hide unregistered connectors from getconnector ioctl
- DSC passthrough aux support
- backlight handling improvements
- add dma_resv_assert_held to vmap/vunmap
edid:
- move luminance calculation to core
fbdev:
- fix aperture helper usage
fourcc:
- add more format helpers
- add DRM_FORMAT_Cxx, DRM_FORMAT_Rxx, DRM_FORMAT_Dxx
- add packed AYUV8888, XYUV8888
- add some kunit tests
ttm:
- allow bos without backing store
- rewrite placement to use intersect/compatible functions
dma-buf:
- docs update
- improve signalling when debugging
udmabuf:
- fix failure path GPF
dp:
- drop dp/mst legacy code
- atomic mst state support
- audio infoframe packing
panel:
- Samsung LTL101AL01
- B120XAN01.0
- R140NWF5 RH
- DMT028VGHMCMI-1A T
- AUO B133UAN02.1
- IVO M133NW4J-R3
- Innolux N120ACA-EA1
amdgpu:
- Gang submit support
- Mode2 reset for RDNA2
- New IP support:
DCN 3.1.4, 3.2
SMU 13.x
NBIO 7.7
GC 11.x
PSP 13.x
SDMA 6.x
GMC 11.x
- DSC passthrough support
- PSP fixes for TA support
- vangogh GFXOFF stats
- clang fixes
- gang submit CS cleanup prep work
- fix VRAM eviction issues
amdkfd:
- GC 10.3 IP ISA fixes
- fix CRIU regression
- CPU fault on COW mapping fixes
i915:
- align fw versioning with kernel practices
- add display substruct to i915 private
- add initial runtime info to driver info
- split out HDCP and backlight registers
- MEI XeHP SDV GSC support
- add per-gt sysfs defaults
- TLB invalidation improvements
- Disable PCI BAR resize on 32-bit
- GuC firmware updates and compat changes
- GuC log timestamp translation
- DG2 preemption workaround changes
- DG2 improved HDMI pixel clocks support
- PCI BAR sanity checks
- Enable DC5 on DG2
- DG2 DMC fw bumped
- ADL-S PCI ID added
- Meteorlake enablement
- Rename ggtt_view to gtt_view
- host RPS fixes
- release mmaps on rpm suspend on discrete
- clocking and dpll refactoring
- VBT definitions and parsing updates
- SKL watermark code extracted to separate file
- allow seamless M/N changes on eDP panels
- BUG_ON removal and cleanups
msm:
- DPU:
simplified VBIF configuration
cleanup CTL interfaces
- DSI:
removed unused msm_display_dsc_config struct
switch regulator calls to new API
switched to PANEL_BRIDGE for direct attached panels
- DSI_PHY: convert drivers to parent_hws
- DP: cleanup pixel_rate handling
- HDMI: turned hdmi-phy-8996 into OF clk provider
- misc dt-bindings fixes
- choose eDP as primary display if it's available
- support getting interconnects from either the mdss or the mdp5/dpu
device nodes
- gem: Shrinker + LRU re-work:
- adds a shared GEM LRU+shrinker helper and moves msm over to that
- reduce lock contention between retire and submit by avoiding the
need to acquire obj lock in the retire path (and instead using the
resv to check the obj's busyness in the shrinker)
- fix reclaim vs submit issues
- GEM fault injection for triggering userspace error paths
- Map/unmap optimization
- Improved robustness for a6xx GPU recovery
virtio:
- improve error and edge conditions handling
- convert to use managed helpers
- stop exposing LINEAR modifier
mgag200:
- split modeset handling per model
udl:
- suspend/disconnect handling improvements
vc4:
- rework HDMI power up
- depend on PM
- better unplugging support
ast:
- resolution handling improvements
ingenic:
- add JZ4760(B) support
- avoid a modeset when sharpness property is unchanged
- use the new PM ops
it6505:
- power seq and clock updates
ssd130x:
- regmap bulk write
- use atomic helpers instead of simple helpers
via:
- rename via_drv to via_dri1, consolidate all code
radeon:
- drop DP MST experimental support
- delayed work flush fix
- use time_after
ti-sn65dsi86:
- DP support
mediatek:
- MT8195 DP support
- drop of_gpio header
- remove unneeded result
- small DP code improvements
vkms:
- RGB565, XRGB64 and ARGB64 support
sun4i:
- tv: convert to atomic
rcar-du:
- Synopsys DW HDMI bridge DT bindings update
exynos:
- use drm_display_info.is_hdmi
- correct return of mixer_mode_valid and hdmi_mode_valid
omap:
- refcounting fix
rockchip:
- RK3568 support
- RK3399 gamma support"
* tag 'drm-next-2022-10-05' of git://anongit.freedesktop.org/drm/drm: (1374 commits)
drm/amdkfd: Fix UBSAN shift-out-of-bounds warning
drm/amdkfd: Track unified memory when switching xnack mode
drm/amdgpu: Enable sram on vcn_4_0_2
drm/amdgpu: Enable VCN DPG for GC11_0_1
drm/msm: Fix build break with recent mm tree
drm/panel: simple: Use dev_err_probe() to simplify code
drm/panel: panel-edp: Use dev_err_probe() to simplify code
drm/panel: simple: Add Multi-Inno Technology MI0800FT-9
dt-bindings: display: simple: Add Multi-Inno Technology MI0800FT-9 panel
drm/amdgpu: correct the memcpy size for ip discovery firmware
drm/amdgpu: Skip put_reset_domain if it doesn't exist
drm/amdgpu: remove switch from amdgpu_gmc_noretry_set
drm/amdgpu: Fix mc_umc_status used uninitialized warning
drm/amd/display: Prevent OTG shutdown during PSR SU
drm/amdgpu: add page retirement handling for CPU RAS
drm/amdgpu: use RAS error address convert api in mca notifier
drm/amdgpu: support to convert dedicated umc mca address
drm/amdgpu: export umc error address convert interface
drm/amdgpu: fix sdma v4 init microcode error
drm/amd/display: fix array-bounds error in dc_stream_remove_writeback()
...
@@ -239,6 +239,7 @@
 
 #define DP_DSC_SUPPORT 0x060 /* DP 1.4 */
 # define DP_DSC_DECOMPRESSION_IS_SUPPORTED (1 << 0)
+# define DP_DSC_PASSTHROUGH_IS_SUPPORTED (1 << 1)
 
 #define DP_DSC_REV 0x061
 # define DP_DSC_MAJOR_MASK (0xf << 0)
@@ -1536,6 +1537,8 @@ enum drm_dp_phy {
 #define DP_SDP_VSC_EXT_CEA 0x21 /* DP 1.4 */
 /* 0x80+ CEA-861 infoframe types */
 
+#define DP_SDP_AUDIO_INFOFRAME_HB2 0x1b
+
 /**
  * struct dp_sdp_header - DP secondary data packet header
  * @HB0: Secondary Data Packet ID
@@ -69,6 +69,8 @@ bool drm_dp_128b132b_link_training_failed(const u8 link_status[DP_LINK_STATUS_SIZE]);
 u8 drm_dp_link_rate_to_bw_code(int link_rate);
 int drm_dp_bw_code_to_link_rate(u8 link_bw);
 
+const char *drm_dp_phy_name(enum drm_dp_phy dp_phy);
+
 /**
  * struct drm_dp_vsc_sdp - drm DP VSC SDP
  *
@@ -48,20 +48,6 @@ struct drm_dp_mst_topology_ref_history {
 
 struct drm_dp_mst_branch;
 
-/**
- * struct drm_dp_vcpi - Virtual Channel Payload Identifier
- * @vcpi: Virtual channel ID.
- * @pbn: Payload Bandwidth Number for this channel
- * @aligned_pbn: PBN aligned with slot size
- * @num_slots: number of slots for this PBN
- */
-struct drm_dp_vcpi {
-        int vcpi;
-        int pbn;
-        int aligned_pbn;
-        int num_slots;
-};
-
 /**
  * struct drm_dp_mst_port - MST port
  * @port_num: port number
@@ -86,6 +72,8 @@ struct drm_dp_vcpi {
  * @next: link to next port on this branch device
  * @aux: i2c aux transport to talk to device connected to this port, protected
  *       by &drm_dp_mst_topology_mgr.base.lock.
+ * @passthrough_aux: parent aux to which DSC pass-through requests should be
+ *       sent, only set if DSC pass-through is possible.
  * @parent: branch device parent of this port
  * @vcpi: Virtual Channel Payload info for this port.
  * @connector: DRM connector this port is connected to. Protected by
@@ -140,9 +128,9 @@ struct drm_dp_mst_port {
         */
        struct drm_dp_mst_branch *mstb;
        struct drm_dp_aux aux; /* i2c bus for this port? */
+       struct drm_dp_aux *passthrough_aux;
        struct drm_dp_mst_branch *parent;
 
-       struct drm_dp_vcpi vcpi;
        struct drm_connector *connector;
        struct drm_dp_mst_topology_mgr *mgr;
 
@@ -527,35 +515,104 @@ struct drm_dp_mst_topology_cbs {
        void (*poll_hpd_irq)(struct drm_dp_mst_topology_mgr *mgr);
 };
 
-#define DP_MAX_PAYLOAD (sizeof(unsigned long) * 8)
-
-#define DP_PAYLOAD_LOCAL 1
-#define DP_PAYLOAD_REMOTE 2
-#define DP_PAYLOAD_DELETE_LOCAL 3
-
-struct drm_dp_payload {
-       int payload_state;
-       int start_slot;
-       int num_slots;
-       int vcpi;
-};
-
 #define to_dp_mst_topology_state(x) container_of(x, struct drm_dp_mst_topology_state, base)
 
-struct drm_dp_vcpi_allocation {
+/**
+ * struct drm_dp_mst_atomic_payload - Atomic state struct for an MST payload
+ *
+ * The primary atomic state structure for a given MST payload. Stores information like current
+ * bandwidth allocation, intended action for this payload, etc.
+ */
+struct drm_dp_mst_atomic_payload {
+       /** @port: The MST port assigned to this payload */
        struct drm_dp_mst_port *port;
-       int vcpi;
+
+       /**
+        * @vc_start_slot: The time slot that this payload starts on. Because payload start slots
+        * can't be determined ahead of time, the contents of this value are UNDEFINED at atomic
+        * check time. This shouldn't usually matter, as the start slot should never be relevant for
+        * atomic state computations.
+        *
+        * Since this value is determined at commit time instead of check time, this value is
+        * protected by the MST helpers ensuring that async commits operating on the given topology
+        * never run in parallel. In the event that a driver does need to read this value (e.g. to
+        * inform hardware of the starting timeslot for a payload), the driver may either:
+        *
+        * * Read this field during the atomic commit after
+        *   drm_dp_mst_atomic_wait_for_dependencies() has been called, which will ensure the
+        *   previous MST states payload start slots have been copied over to the new state. Note
+        *   that a new start slot won't be assigned/removed from this payload until
+        *   drm_dp_add_payload_part1()/drm_dp_remove_payload() have been called.
+        * * Acquire the MST modesetting lock, and then wait for any pending MST-related commits to
+        *   get committed to hardware by calling drm_crtc_commit_wait() on each of the
+        *   &drm_crtc_commit structs in &drm_dp_mst_topology_state.commit_deps.
+        *
+        * If neither of the two above solutions suffice (e.g. the driver needs to read the start
+        * slot in the middle of an atomic commit without waiting for some reason), then drivers
+        * should cache this value themselves after changing payloads.
+        */
+       s8 vc_start_slot;
+
+       /** @vcpi: The Virtual Channel Payload Identifier */
+       u8 vcpi;
+       /**
+        * @time_slots:
+        * The number of timeslots allocated to this payload from the source DP Tx to
+        * the immediate downstream DP Rx
+        */
+       int time_slots;
+       /** @pbn: The payload bandwidth for this payload */
        int pbn;
-       bool dsc_enabled;
+
+       /** @delete: Whether or not we intend to delete this payload during this atomic commit */
+       bool delete : 1;
+       /** @dsc_enabled: Whether or not this payload has DSC enabled */
+       bool dsc_enabled : 1;
+
+       /** @next: The list node for this payload */
        struct list_head next;
 };
 
 /**
  * struct drm_dp_mst_topology_state - DisplayPort MST topology atomic state
  *
  * This struct represents the atomic state of the toplevel DisplayPort MST manager
  */
 struct drm_dp_mst_topology_state {
+       /** @base: Base private state for atomic */
        struct drm_private_state base;
-       struct list_head vcpis;
+
+       /** @mgr: The topology manager */
        struct drm_dp_mst_topology_mgr *mgr;
+
+       /**
+        * @pending_crtc_mask: A bitmask of all CRTCs this topology state touches, drivers may
+        * modify this to add additional dependencies if needed.
+        */
+       u32 pending_crtc_mask;
+       /**
+        * @commit_deps: A list of all CRTC commits affecting this topology, this field isn't
+        * populated until drm_dp_mst_atomic_wait_for_dependencies() is called.
+        */
+       struct drm_crtc_commit **commit_deps;
+       /** @num_commit_deps: The number of CRTC commits in @commit_deps */
+       size_t num_commit_deps;
+
+       /** @payload_mask: A bitmask of allocated VCPIs, used for VCPI assignments */
+       u32 payload_mask;
+       /** @payloads: The list of payloads being created/destroyed in this state */
+       struct list_head payloads;
+
+       /** @total_avail_slots: The total number of slots this topology can handle (63 or 64) */
        u8 total_avail_slots;
+       /** @start_slot: The first usable time slot in this topology (1 or 0) */
        u8 start_slot;
+
+       /**
+        * @pbn_div: The current PBN divisor for this topology. The driver is expected to fill this
+        * out itself.
+        */
+       int pbn_div;
 };
 
 #define to_dp_mst_topology_mgr(x) container_of(x, struct drm_dp_mst_topology_mgr, base)
@@ -595,14 +652,6 @@ struct drm_dp_mst_topology_mgr {
         * @max_payloads: maximum number of payloads the GPU can generate.
         */
        int max_payloads;
-       /**
-        * @max_lane_count: maximum number of lanes the GPU can drive.
-        */
-       int max_lane_count;
-       /**
-        * @max_link_rate: maximum link rate per lane GPU can output, in kHz.
-        */
-       int max_link_rate;
        /**
         * @conn_base_id: DRM connector ID this mgr is connected to. Only used
         * to build the MST connector path value.
@@ -645,6 +694,20 @@ struct drm_dp_mst_topology_mgr {
         */
        bool payload_id_table_cleared : 1;
 
+       /**
+        * @payload_count: The number of currently active payloads in hardware. This value is only
+        * intended to be used internally by MST helpers for payload tracking, and is only safe to
+        * read/write from the atomic commit (not check) context.
+        */
+       u8 payload_count;
+
+       /**
+        * @next_start_slot: The starting timeslot to use for new VC payloads. This value is used
+        * internally by MST helpers for payload tracking, and is only safe to read/write from the
+        * atomic commit (not check) context.
+        */
+       u8 next_start_slot;
+
        /**
         * @mst_primary: Pointer to the primary/first branch device.
         */
@@ -658,10 +721,6 @@ struct drm_dp_mst_topology_mgr {
         * @sink_count: Sink count from DEVICE_SERVICE_IRQ_VECTOR_ESI0.
         */
        u8 sink_count;
-       /**
-        * @pbn_div: PBN to slots divisor.
-        */
-       int pbn_div;
 
        /**
         * @funcs: Atomic helper callbacks
@@ -678,32 +737,6 @@ struct drm_dp_mst_topology_mgr {
         */
        struct list_head tx_msg_downq;
 
-       /**
-        * @payload_lock: Protect payload information.
-        */
-       struct mutex payload_lock;
-       /**
-        * @proposed_vcpis: Array of pointers for the new VCPI allocation. The
-        * VCPI structure itself is &drm_dp_mst_port.vcpi, and the size of
-        * this array is determined by @max_payloads.
-        */
-       struct drm_dp_vcpi **proposed_vcpis;
-       /**
-        * @payloads: Array of payloads. The size of this array is determined
-        * by @max_payloads.
-        */
-       struct drm_dp_payload *payloads;
-       /**
-        * @payload_mask: Elements of @payloads actually in use. Since
-        * reallocation of active outputs isn't possible gaps can be created by
-        * disabling outputs out of order compared to how they've been enabled.
-        */
-       unsigned long payload_mask;
-       /**
-        * @vcpi_mask: Similar to @payload_mask, but for @proposed_vcpis.
-        */
-       unsigned long vcpi_mask;
-
        /**
         * @tx_waitq: Wait to queue stall for the tx worker.
         */
@@ -775,9 +808,7 @@ struct drm_dp_mst_topology_mgr {
 int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
                                 struct drm_device *dev, struct drm_dp_aux *aux,
                                 int max_dpcd_transaction_bytes,
-                                int max_payloads,
-                                int max_lane_count, int max_link_rate,
-                                int conn_base_id);
+                                int max_payloads, int conn_base_id);
 
 void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr);
@@ -800,28 +831,17 @@ int drm_dp_get_vc_payload_bw(const struct drm_dp_mst_topology_mgr *mgr,
 
 int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc);
 
-bool drm_dp_mst_allocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
-                              struct drm_dp_mst_port *port, int pbn, int slots);
-
-int drm_dp_mst_get_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
-
-void drm_dp_mst_reset_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
-
 void drm_dp_mst_update_slots(struct drm_dp_mst_topology_state *mst_state, uint8_t link_encoding_cap);
 
-void drm_dp_mst_deallocate_vcpi(struct drm_dp_mst_topology_mgr *mgr,
-                                struct drm_dp_mst_port *port);
-
-int drm_dp_find_vcpi_slots(struct drm_dp_mst_topology_mgr *mgr,
-                           int pbn);
-
-int drm_dp_update_payload_part1(struct drm_dp_mst_topology_mgr *mgr, int start_slot);
-
-int drm_dp_update_payload_part2(struct drm_dp_mst_topology_mgr *mgr);
+int drm_dp_add_payload_part1(struct drm_dp_mst_topology_mgr *mgr,
+                             struct drm_dp_mst_topology_state *mst_state,
+                             struct drm_dp_mst_atomic_payload *payload);
+int drm_dp_add_payload_part2(struct drm_dp_mst_topology_mgr *mgr,
+                             struct drm_atomic_state *state,
+                             struct drm_dp_mst_atomic_payload *payload);
+void drm_dp_remove_payload(struct drm_dp_mst_topology_mgr *mgr,
+                           struct drm_dp_mst_topology_state *mst_state,
+                           struct drm_dp_mst_atomic_payload *payload);
 
 int drm_dp_check_act_status(struct drm_dp_mst_topology_mgr *mgr);
@@ -843,36 +863,51 @@ int drm_dp_mst_connector_late_register(struct drm_connector *connector,
 void drm_dp_mst_connector_early_unregister(struct drm_connector *connector,
                                           struct drm_dp_mst_port *port);
 
-struct drm_dp_mst_topology_state *drm_atomic_get_mst_topology_state(struct drm_atomic_state *state,
-                                                                    struct drm_dp_mst_topology_mgr *mgr);
+struct drm_dp_mst_topology_state *
+drm_atomic_get_mst_topology_state(struct drm_atomic_state *state,
+                                  struct drm_dp_mst_topology_mgr *mgr);
+struct drm_dp_mst_topology_state *
+drm_atomic_get_new_mst_topology_state(struct drm_atomic_state *state,
+                                      struct drm_dp_mst_topology_mgr *mgr);
+struct drm_dp_mst_atomic_payload *
+drm_atomic_get_mst_payload_state(struct drm_dp_mst_topology_state *state,
+                                 struct drm_dp_mst_port *port);
 int __must_check
-drm_dp_atomic_find_vcpi_slots(struct drm_atomic_state *state,
+drm_dp_atomic_find_time_slots(struct drm_atomic_state *state,
                              struct drm_dp_mst_topology_mgr *mgr,
-                             struct drm_dp_mst_port *port, int pbn,
-                             int pbn_div);
+                             struct drm_dp_mst_port *port, int pbn);
 int drm_dp_mst_atomic_enable_dsc(struct drm_atomic_state *state,
                                 struct drm_dp_mst_port *port,
-                                int pbn, int pbn_div,
-                                bool enable);
+                                int pbn, bool enable);
 int __must_check
 drm_dp_mst_add_affected_dsc_crtcs(struct drm_atomic_state *state,
                                   struct drm_dp_mst_topology_mgr *mgr);
 int __must_check
-drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state,
+drm_dp_atomic_release_time_slots(struct drm_atomic_state *state,
                                 struct drm_dp_mst_topology_mgr *mgr,
                                 struct drm_dp_mst_port *port);
+void drm_dp_mst_atomic_wait_for_dependencies(struct drm_atomic_state *state);
+int __must_check drm_dp_mst_atomic_setup_commit(struct drm_atomic_state *state);
 int drm_dp_send_power_updown_phy(struct drm_dp_mst_topology_mgr *mgr,
                                 struct drm_dp_mst_port *port, bool power_up);
 int drm_dp_send_query_stream_enc_status(struct drm_dp_mst_topology_mgr *mgr,
                                        struct drm_dp_mst_port *port,
                                        struct drm_dp_query_stream_enc_status_ack_reply *status);
 int __must_check drm_dp_mst_atomic_check(struct drm_atomic_state *state);
+int __must_check drm_dp_mst_root_conn_atomic_check(struct drm_connector_state *new_conn_state,
+                                                   struct drm_dp_mst_topology_mgr *mgr);
 
 void drm_dp_mst_get_port_malloc(struct drm_dp_mst_port *port);
 void drm_dp_mst_put_port_malloc(struct drm_dp_mst_port *port);
 
 struct drm_dp_aux *drm_dp_mst_dsc_aux_for_port(struct drm_dp_mst_port *port);
 
 static inline struct drm_dp_mst_topology_state *
 to_drm_dp_mst_topology_state(struct drm_private_state *state)
 {
        return container_of(state, struct drm_dp_mst_topology_state, base);
 }
 
 extern const struct drm_private_state_funcs drm_dp_mst_topology_state_funcs;
 
 /**
@@ -34,12 +34,24 @@
 #include <drm/drm_atomic_state_helper.h>
 #include <drm/drm_util.h>
 
+/*
+ * Drivers that don't allow primary plane scaling may pass this macro in place
+ * of the min/max scale parameters of the plane-state checker function.
+ *
+ * Due to src being in 16.16 fixed point and dest being in integer pixels,
+ * 1<<16 represents no scaling.
+ */
+#define DRM_PLANE_NO_SCALING (1<<16)
+
 struct drm_atomic_state;
 struct drm_private_obj;
 struct drm_private_state;
 
 int drm_atomic_helper_check_modeset(struct drm_device *dev,
                                    struct drm_atomic_state *state);
+int
+drm_atomic_helper_check_wb_encoder_state(struct drm_encoder *encoder,
+                                         struct drm_connector_state *conn_state);
 int drm_atomic_helper_check_plane_state(struct drm_plane_state *plane_state,
                                        const struct drm_crtc_state *crtc_state,
                                        int min_scale,
@@ -930,6 +930,8 @@ struct drm_bridge *devm_drm_panel_bridge_add(struct device *dev,
 struct drm_bridge *devm_drm_panel_bridge_add_typed(struct device *dev,
                                                   struct drm_panel *panel,
                                                   u32 connector_type);
+struct drm_bridge *drmm_panel_bridge_add(struct drm_device *drm,
+                                         struct drm_panel *panel);
 struct drm_connector *drm_panel_bridge_connector(struct drm_bridge *bridge);
 #else
 static inline bool drm_bridge_is_panel(const struct drm_bridge *bridge)

@@ -947,6 +949,8 @@ static inline int drm_panel_bridge_set_orientation(struct drm_connector *connector,
 #if defined(CONFIG_OF) && defined(CONFIG_DRM_PANEL_BRIDGE)
 struct drm_bridge *devm_drm_of_get_bridge(struct device *dev, struct device_node *node,
                                          u32 port, u32 endpoint);
+struct drm_bridge *drmm_of_get_bridge(struct drm_device *drm, struct device_node *node,
+                                      u32 port, u32 endpoint);
 #else
 static inline struct drm_bridge *devm_drm_of_get_bridge(struct device *dev,
                                                        struct device_node *node,

@@ -955,6 +959,14 @@ static inline struct drm_bridge *devm_drm_of_get_bridge(struct device *dev,
 {
        return ERR_PTR(-ENODEV);
 }
+
+static inline struct drm_bridge *drmm_of_get_bridge(struct drm_device *drm,
+                                                    struct device_node *node,
+                                                    u32 port,
+                                                    u32 endpoint)
+{
+       return ERR_PTR(-ENODEV);
+}
 #endif
 
 #endif
@@ -323,6 +323,22 @@ struct drm_monitor_range_info {
        u16 max_vfreq;
 };
 
+/**
+ * struct drm_luminance_range_info - Panel's luminance range for
+ * &drm_display_info. Calculated using data in EDID
+ *
+ * This struct is used to store a luminance range supported by panel
+ * as calculated using data from EDID's static hdr metadata.
+ *
+ * @min_luminance: This is the min supported luminance value
+ *
+ * @max_luminance: This is the max supported luminance value
+ */
+struct drm_luminance_range_info {
+       u32 min_luminance;
+       u32 max_luminance;
+};
+
 /**
  * enum drm_privacy_screen_status - privacy screen status
  *

@@ -624,6 +640,11 @@ struct drm_display_info {
         */
        struct drm_monitor_range_info monitor_range;
 
+       /**
+        * @luminance_range: Luminance range supported by panel
+        */
+       struct drm_luminance_range_info luminance_range;
+
        /**
         * @mso_stream_count: eDP Multi-SST Operation (MSO) stream count from
         * the DisplayID VESA vendor block. 0 for conventional Single-Stream

@@ -1677,6 +1698,11 @@ int drm_connector_init_with_ddc(struct drm_device *dev,
                                const struct drm_connector_funcs *funcs,
                                int connector_type,
                                struct i2c_adapter *ddc);
+int drmm_connector_init(struct drm_device *dev,
+                       struct drm_connector *connector,
+                       const struct drm_connector_funcs *funcs,
+                       int connector_type,
+                       struct i2c_adapter *ddc);
 void drm_connector_attach_edid_property(struct drm_connector *connector);
 int drm_connector_register(struct drm_connector *connector);
 void drm_connector_unregister(struct drm_connector *connector);
@@ -1216,6 +1216,15 @@ int drm_crtc_init_with_planes(struct drm_device *dev,
                              struct drm_plane *cursor,
                              const struct drm_crtc_funcs *funcs,
                              const char *name, ...);
 
+__printf(6, 7)
+int drmm_crtc_init_with_planes(struct drm_device *dev,
+                              struct drm_crtc *crtc,
+                              struct drm_plane *primary,
+                              struct drm_plane *cursor,
+                              const struct drm_crtc_funcs *funcs,
+                              const char *name, ...);
+
 void drm_crtc_cleanup(struct drm_crtc *crtc);
 
 __printf(7, 8)
@@ -194,6 +194,12 @@ int drm_encoder_init(struct drm_device *dev,
                     const struct drm_encoder_funcs *funcs,
                     int encoder_type, const char *name, ...);
 
+__printf(5, 6)
+int drmm_encoder_init(struct drm_device *dev,
+                      struct drm_encoder *encoder,
+                      const struct drm_encoder_funcs *funcs,
+                      int encoder_type, const char *name, ...);
+
 __printf(6, 7)
 void *__drmm_encoder_alloc(struct drm_device *dev,
                           size_t size, size_t offset,
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __DRM_FB_CMA_HELPER_H__
-#define __DRM_FB_CMA_HELPER_H__
+#ifndef __DRM_FB_DMA_HELPER_H__
+#define __DRM_FB_DMA_HELPER_H__
 
 #include <linux/types.h>
 
@@ -8,14 +8,14 @@ struct drm_device;
 struct drm_framebuffer;
 struct drm_plane_state;
 
-struct drm_gem_cma_object *drm_fb_cma_get_gem_obj(struct drm_framebuffer *fb,
-                                                  unsigned int plane);
+struct drm_gem_dma_object *drm_fb_dma_get_gem_obj(struct drm_framebuffer *fb,
+                                                  unsigned int plane);
 
-dma_addr_t drm_fb_cma_get_gem_addr(struct drm_framebuffer *fb,
-                                   struct drm_plane_state *state,
-                                   unsigned int plane);
+dma_addr_t drm_fb_dma_get_gem_addr(struct drm_framebuffer *fb,
+                                   struct drm_plane_state *state,
+                                   unsigned int plane);
 
-void drm_fb_cma_sync_non_coherent(struct drm_device *drm,
-                                  struct drm_plane_state *old_state,
-                                  struct drm_plane_state *state);
+void drm_fb_dma_sync_non_coherent(struct drm_device *drm,
+                                  struct drm_plane_state *old_state,
+                                  struct drm_plane_state *state);
@@ -421,13 +421,4 @@ void drm_send_event_timestamp_locked(struct drm_device *dev,
 
 struct file *mock_drm_getfile(struct drm_minor *minor, unsigned int flags);
 
-#ifdef CONFIG_MMU
-struct drm_vma_offset_manager;
-unsigned long drm_get_unmapped_area(struct file *file,
-                                    unsigned long uaddr, unsigned long len,
-                                    unsigned long pgoff, unsigned long flags,
-                                    struct drm_vma_offset_manager *mgr);
-#endif /* CONFIG_MMU */
-
 #endif /* _DRM_FILE_H_ */
@@ -6,44 +6,51 @@
 #ifndef __LINUX_DRM_FORMAT_HELPER_H
 #define __LINUX_DRM_FORMAT_HELPER_H
 
 #include <linux/types.h>
 
+struct drm_device;
 struct drm_format_info;
 struct drm_framebuffer;
 struct drm_rect;
 
+struct iosys_map;
+
 unsigned int drm_fb_clip_offset(unsigned int pitch, const struct drm_format_info *format,
                                const struct drm_rect *clip);
 
-void drm_fb_memcpy(void *dst, unsigned int dst_pitch, const void *vaddr,
-                   const struct drm_framebuffer *fb, const struct drm_rect *clip);
-void drm_fb_memcpy_toio(void __iomem *dst, unsigned int dst_pitch, const void *vaddr,
-                        const struct drm_framebuffer *fb, const struct drm_rect *clip);
-void drm_fb_swab(void *dst, unsigned int dst_pitch, const void *src,
-                 const struct drm_framebuffer *fb, const struct drm_rect *clip,
-                 bool cached);
-void drm_fb_xrgb8888_to_rgb332(void *dst, unsigned int dst_pitch, const void *vaddr,
-                               const struct drm_framebuffer *fb, const struct drm_rect *clip);
-void drm_fb_xrgb8888_to_rgb565(void *dst, unsigned int dst_pitch, const void *vaddr,
-                               const struct drm_framebuffer *fb, const struct drm_rect *clip,
-                               bool swab);
-void drm_fb_xrgb8888_to_rgb565_toio(void __iomem *dst, unsigned int dst_pitch,
-                                    const void *vaddr, const struct drm_framebuffer *fb,
-                                    const struct drm_rect *clip, bool swab);
-void drm_fb_xrgb8888_to_rgb888(void *dst, unsigned int dst_pitch, const void *src,
-                               const struct drm_framebuffer *fb, const struct drm_rect *clip);
-void drm_fb_xrgb8888_to_rgb888_toio(void __iomem *dst, unsigned int dst_pitch,
-                                    const void *vaddr, const struct drm_framebuffer *fb,
-                                    const struct drm_rect *clip);
-void drm_fb_xrgb8888_to_xrgb2101010_toio(void __iomem *dst, unsigned int dst_pitch,
-                                         const void *vaddr, const struct drm_framebuffer *fb,
-                                         const struct drm_rect *clip);
-void drm_fb_xrgb8888_to_gray8(void *dst, unsigned int dst_pitch, const void *vaddr,
-                              const struct drm_framebuffer *fb, const struct drm_rect *clip);
+void drm_fb_memcpy(struct iosys_map *dst, const unsigned int *dst_pitch,
+                   const struct iosys_map *src, const struct drm_framebuffer *fb,
+                   const struct drm_rect *clip);
+void drm_fb_swab(struct iosys_map *dst, const unsigned int *dst_pitch,
+                 const struct iosys_map *src, const struct drm_framebuffer *fb,
+                 const struct drm_rect *clip, bool cached);
+void drm_fb_xrgb8888_to_rgb332(struct iosys_map *dst, const unsigned int *dst_pitch,
+                               const struct iosys_map *src, const struct drm_framebuffer *fb,
+                               const struct drm_rect *clip);
+void drm_fb_xrgb8888_to_rgb565(struct iosys_map *dst, const unsigned int *dst_pitch,
+                               const struct iosys_map *src, const struct drm_framebuffer *fb,
+                               const struct drm_rect *clip, bool swab);
+void drm_fb_xrgb8888_to_rgb888(struct iosys_map *dst, const unsigned int *dst_pitch,
+                               const struct iosys_map *src, const struct drm_framebuffer *fb,
+                               const struct drm_rect *clip);
+void drm_fb_xrgb8888_to_xrgb2101010(struct iosys_map *dst, const unsigned int *dst_pitch,
+                                    const struct iosys_map *src, const struct drm_framebuffer *fb,
+                                    const struct drm_rect *clip);
+void drm_fb_xrgb8888_to_gray8(struct iosys_map *dst, const unsigned int *dst_pitch,
+                              const struct iosys_map *src, const struct drm_framebuffer *fb,
+                              const struct drm_rect *clip);
 
-int drm_fb_blit_toio(void __iomem *dst, unsigned int dst_pitch, uint32_t dst_format,
-                     const void *vmap, const struct drm_framebuffer *fb,
-                     const struct drm_rect *rect);
+int drm_fb_blit(struct iosys_map *dst, const unsigned int *dst_pitch, uint32_t dst_format,
+                const struct iosys_map *src, const struct drm_framebuffer *fb,
+                const struct drm_rect *rect);
 
-void drm_fb_xrgb8888_to_mono(void *dst, unsigned int dst_pitch, const void *src,
-                             const struct drm_framebuffer *fb, const struct drm_rect *clip);
+void drm_fb_xrgb8888_to_mono(struct iosys_map *dst, const unsigned int *dst_pitch,
+                             const struct iosys_map *src, const struct drm_framebuffer *fb,
+                             const struct drm_rect *clip);
+
+size_t drm_fb_build_fourcc_list(struct drm_device *dev,
+                                const u32 *native_fourccs, size_t native_nfourccs,
+                                const u32 *extra_fourccs, size_t extra_nfourccs,
+                                u32 *fourccs_out, size_t nfourccs_out);
 
 #endif /* __LINUX_DRM_FORMAT_HELPER_H */
@@ -138,6 +138,9 @@ struct drm_format_info {

/** @is_yuv: Is it a YUV format? */
bool is_yuv;

/** @is_color_indexed: Is it a color-indexed format? */
bool is_color_indexed;
};

/**
@@ -313,6 +316,7 @@ unsigned int drm_format_info_block_width(const struct drm_format_info *info,
int plane);
unsigned int drm_format_info_block_height(const struct drm_format_info *info,
int plane);
unsigned int drm_format_info_bpp(const struct drm_format_info *info, int plane);
uint64_t drm_format_info_min_pitch(const struct drm_format_info *info,
int plane, unsigned int buffer_width);


@@ -154,10 +154,10 @@ struct drm_framebuffer {
* drm_mode_fb_cmd2.
*
* Note that this is a linear offset and does not take into account
* tiling or buffer laytou per @modifier. It meant to be used when the
* actual pixel data for this framebuffer plane starts at an offset,
* e.g. when multiple planes are allocated within the same backing
* storage buffer object. For tiled layouts this generally means it
* tiling or buffer layout per @modifier. It is meant to be used when
* the actual pixel data for this framebuffer plane starts at an offset,
* e.g. when multiple planes are allocated within the same backing
* storage buffer object. For tiled layouts this generally means its
* @offsets must at least be tile-size aligned, but hardware often has
* stricter requirements.
*

@@ -174,6 +174,41 @@ struct drm_gem_object_funcs {
const struct vm_operations_struct *vm_ops;
};

/**
* struct drm_gem_lru - A simple LRU helper
*
* A helper for tracking GEM objects in a given state, to aid in
* driver's shrinker implementation. Tracks the count of pages
* for lockless &shrinker.count_objects, and provides
* &drm_gem_lru_scan for driver's &shrinker.scan_objects
* implementation.
*/
struct drm_gem_lru {
/**
* @lock:
*
* Lock protecting movement of GEM objects between LRUs. All
* LRUs that the object can move between should be protected
* by the same lock.
*/
struct mutex *lock;

/**
* @count:
*
* The total number of backing pages of the GEM objects in
* this LRU.
*/
long count;

/**
* @list:
*
* The LRU list.
*/
struct list_head list;
};

/**
* struct drm_gem_object - GEM buffer object
*
@@ -217,7 +252,7 @@ struct drm_gem_object {
*
* SHMEM file node used as backing storage for swappable buffer objects.
* GEM also supports driver private objects with driver-specific backing
* storage (contiguous CMA memory, special reserved blocks). In this
* storage (contiguous DMA memory, special reserved blocks). In this
* case @filp is NULL.
*/
struct file *filp;
@@ -312,6 +347,20 @@ struct drm_gem_object {
*
*/
const struct drm_gem_object_funcs *funcs;

/**
* @lru_node:
*
* List node in a &drm_gem_lru.
*/
struct list_head lru_node;

/**
* @lru:
*
* The current LRU list that the GEM object is on.
*/
struct drm_gem_lru *lru;
};

/**
@@ -420,4 +469,10 @@ void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count,
int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
u32 handle, u64 *offset);

void drm_gem_lru_init(struct drm_gem_lru *lru, struct mutex *lock);
void drm_gem_lru_remove(struct drm_gem_object *obj);
void drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj);
unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan,
bool (*shrink)(struct drm_gem_object *obj));

#endif /* __DRM_GEM_H__ */

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __DRM_GEM_CMA_HELPER_H__
#define __DRM_GEM_CMA_HELPER_H__
#ifndef __DRM_GEM_DMA_HELPER_H__
#define __DRM_GEM_DMA_HELPER_H__

#include <drm/drm_file.h>
#include <drm/drm_ioctl.h>
@@ -9,128 +9,128 @@
struct drm_mode_create_dumb;

/**
* struct drm_gem_cma_object - GEM object backed by CMA memory allocations
* struct drm_gem_dma_object - GEM object backed by DMA memory allocations
* @base: base GEM object
* @paddr: physical address of the backing memory
* @dma_addr: DMA address of the backing memory
* @sgt: scatter/gather table for imported PRIME buffers. The table can have
* more than one entry but they are guaranteed to have contiguous
* DMA addresses.
* @vaddr: kernel virtual address of the backing memory
* @map_noncoherent: if true, the GEM object is backed by non-coherent memory
*/
struct drm_gem_cma_object {
struct drm_gem_dma_object {
struct drm_gem_object base;
dma_addr_t paddr;
dma_addr_t dma_addr;
struct sg_table *sgt;

/* For objects with DMA memory allocated by GEM CMA */
/* For objects with DMA memory allocated by GEM DMA */
void *vaddr;

bool map_noncoherent;
};

#define to_drm_gem_cma_obj(gem_obj) \
container_of(gem_obj, struct drm_gem_cma_object, base)
#define to_drm_gem_dma_obj(gem_obj) \
container_of(gem_obj, struct drm_gem_dma_object, base)

struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
struct drm_gem_dma_object *drm_gem_dma_create(struct drm_device *drm,
size_t size);
void drm_gem_cma_free(struct drm_gem_cma_object *cma_obj);
void drm_gem_cma_print_info(const struct drm_gem_cma_object *cma_obj,
void drm_gem_dma_free(struct drm_gem_dma_object *dma_obj);
void drm_gem_dma_print_info(const struct drm_gem_dma_object *dma_obj,
struct drm_printer *p, unsigned int indent);
struct sg_table *drm_gem_cma_get_sg_table(struct drm_gem_cma_object *cma_obj);
int drm_gem_cma_vmap(struct drm_gem_cma_object *cma_obj,
struct sg_table *drm_gem_dma_get_sg_table(struct drm_gem_dma_object *dma_obj);
int drm_gem_dma_vmap(struct drm_gem_dma_object *dma_obj,
struct iosys_map *map);
int drm_gem_cma_mmap(struct drm_gem_cma_object *cma_obj, struct vm_area_struct *vma);
int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *vma);

extern const struct vm_operations_struct drm_gem_cma_vm_ops;
extern const struct vm_operations_struct drm_gem_dma_vm_ops;

/*
* GEM object functions
*/

/**
* drm_gem_cma_object_free - GEM object function for drm_gem_cma_free()
* drm_gem_dma_object_free - GEM object function for drm_gem_dma_free()
* @obj: GEM object to free
*
* This function wraps drm_gem_cma_free_object(). Drivers that employ the CMA helpers
* This function wraps drm_gem_dma_free_object(). Drivers that employ the DMA helpers
* should use it as their &drm_gem_object_funcs.free handler.
*/
static inline void drm_gem_cma_object_free(struct drm_gem_object *obj)
static inline void drm_gem_dma_object_free(struct drm_gem_object *obj)
{
struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj);

drm_gem_cma_free(cma_obj);
drm_gem_dma_free(dma_obj);
}

/**
* drm_gem_cma_object_print_info() - Print &drm_gem_cma_object info for debugfs
* drm_gem_dma_object_print_info() - Print &drm_gem_dma_object info for debugfs
* @p: DRM printer
* @indent: Tab indentation level
* @obj: GEM object
*
* This function wraps drm_gem_cma_print_info(). Drivers that employ the CMA helpers
* This function wraps drm_gem_dma_print_info(). Drivers that employ the DMA helpers
* should use this function as their &drm_gem_object_funcs.print_info handler.
*/
static inline void drm_gem_cma_object_print_info(struct drm_printer *p, unsigned int indent,
static inline void drm_gem_dma_object_print_info(struct drm_printer *p, unsigned int indent,
const struct drm_gem_object *obj)
{
const struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
const struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj);

drm_gem_cma_print_info(cma_obj, p, indent);
drm_gem_dma_print_info(dma_obj, p, indent);
}

/**
* drm_gem_cma_object_get_sg_table - GEM object function for drm_gem_cma_get_sg_table()
* drm_gem_dma_object_get_sg_table - GEM object function for drm_gem_dma_get_sg_table()
* @obj: GEM object
*
* This function wraps drm_gem_cma_get_sg_table(). Drivers that employ the CMA helpers should
* This function wraps drm_gem_dma_get_sg_table(). Drivers that employ the DMA helpers should
* use it as their &drm_gem_object_funcs.get_sg_table handler.
*
* Returns:
* A pointer to the scatter/gather table of pinned pages or NULL on failure.
*/
static inline struct sg_table *drm_gem_cma_object_get_sg_table(struct drm_gem_object *obj)
static inline struct sg_table *drm_gem_dma_object_get_sg_table(struct drm_gem_object *obj)
{
struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj);

return drm_gem_cma_get_sg_table(cma_obj);
return drm_gem_dma_get_sg_table(dma_obj);
}

/*
* drm_gem_cma_object_vmap - GEM object function for drm_gem_cma_vmap()
* drm_gem_dma_object_vmap - GEM object function for drm_gem_dma_vmap()
* @obj: GEM object
* @map: Returns the kernel virtual address of the CMA GEM object's backing store.
* @map: Returns the kernel virtual address of the DMA GEM object's backing store.
*
* This function wraps drm_gem_cma_vmap(). Drivers that employ the CMA helpers should
* This function wraps drm_gem_dma_vmap(). Drivers that employ the DMA helpers should
* use it as their &drm_gem_object_funcs.vmap handler.
*
* Returns:
* 0 on success or a negative error code on failure.
*/
static inline int drm_gem_cma_object_vmap(struct drm_gem_object *obj,
static inline int drm_gem_dma_object_vmap(struct drm_gem_object *obj,
struct iosys_map *map)
{
struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj);

return drm_gem_cma_vmap(cma_obj, map);
return drm_gem_dma_vmap(dma_obj, map);
}

/**
* drm_gem_cma_object_mmap - GEM object function for drm_gem_cma_mmap()
* drm_gem_dma_object_mmap - GEM object function for drm_gem_dma_mmap()
* @obj: GEM object
* @vma: VMA for the area to be mapped
*
* This function wraps drm_gem_cma_mmap(). Drivers that employ the cma helpers should
* This function wraps drm_gem_dma_mmap(). Drivers that employ the dma helpers should
* use it as their &drm_gem_object_funcs.mmap handler.
*
* Returns:
* 0 on success or a negative error code on failure.
*/
static inline int drm_gem_cma_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
static inline int drm_gem_dma_object_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
{
struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
struct drm_gem_dma_object *dma_obj = to_drm_gem_dma_obj(obj);

return drm_gem_cma_mmap(cma_obj, vma);
return drm_gem_dma_mmap(dma_obj, vma);
}

/*
@@ -138,57 +138,57 @@ static inline int drm_gem_cma_object_mmap(struct drm_gem_object *obj, struct vm_
*/

/* create memory region for DRM framebuffer */
int drm_gem_cma_dumb_create_internal(struct drm_file *file_priv,
int drm_gem_dma_dumb_create_internal(struct drm_file *file_priv,
struct drm_device *drm,
struct drm_mode_create_dumb *args);

/* create memory region for DRM framebuffer */
int drm_gem_cma_dumb_create(struct drm_file *file_priv,
int drm_gem_dma_dumb_create(struct drm_file *file_priv,
struct drm_device *drm,
struct drm_mode_create_dumb *args);

struct drm_gem_object *
drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
drm_gem_dma_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
struct sg_table *sgt);

/**
* DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE - CMA GEM driver operations
* DRM_GEM_DMA_DRIVER_OPS_WITH_DUMB_CREATE - DMA GEM driver operations
* @dumb_create_func: callback function for .dumb_create
*
* This macro provides a shortcut for setting the default GEM operations in the
* &drm_driver structure.
*
* This macro is a variant of DRM_GEM_CMA_DRIVER_OPS for drivers that
* This macro is a variant of DRM_GEM_DMA_DRIVER_OPS for drivers that
* override the default implementation of &struct rm_driver.dumb_create. Use
* DRM_GEM_CMA_DRIVER_OPS if possible. Drivers that require a virtual address
* DRM_GEM_DMA_DRIVER_OPS if possible. Drivers that require a virtual address
* on imported buffers should use
* DRM_GEM_CMA_DRIVER_OPS_VMAP_WITH_DUMB_CREATE() instead.
* DRM_GEM_DMA_DRIVER_OPS_VMAP_WITH_DUMB_CREATE() instead.
*/
#define DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(dumb_create_func) \
#define DRM_GEM_DMA_DRIVER_OPS_WITH_DUMB_CREATE(dumb_create_func) \
.dumb_create = (dumb_create_func), \
.prime_handle_to_fd = drm_gem_prime_handle_to_fd, \
.prime_fd_to_handle = drm_gem_prime_fd_to_handle, \
.gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table, \
.gem_prime_import_sg_table = drm_gem_dma_prime_import_sg_table, \
.gem_prime_mmap = drm_gem_prime_mmap

/**
* DRM_GEM_CMA_DRIVER_OPS - CMA GEM driver operations
* DRM_GEM_DMA_DRIVER_OPS - DMA GEM driver operations
*
* This macro provides a shortcut for setting the default GEM operations in the
* &drm_driver structure.
*
* Drivers that come with their own implementation of
* &struct drm_driver.dumb_create should use
* DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE() instead. Use
* DRM_GEM_CMA_DRIVER_OPS if possible. Drivers that require a virtual address
* on imported buffers should use DRM_GEM_CMA_DRIVER_OPS_VMAP instead.
* DRM_GEM_DMA_DRIVER_OPS_WITH_DUMB_CREATE() instead. Use
* DRM_GEM_DMA_DRIVER_OPS if possible. Drivers that require a virtual address
* on imported buffers should use DRM_GEM_DMA_DRIVER_OPS_VMAP instead.
*/
#define DRM_GEM_CMA_DRIVER_OPS \
DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(drm_gem_cma_dumb_create)
#define DRM_GEM_DMA_DRIVER_OPS \
DRM_GEM_DMA_DRIVER_OPS_WITH_DUMB_CREATE(drm_gem_dma_dumb_create)

/**
* DRM_GEM_CMA_DRIVER_OPS_VMAP_WITH_DUMB_CREATE - CMA GEM driver operations
* DRM_GEM_DMA_DRIVER_OPS_VMAP_WITH_DUMB_CREATE - DMA GEM driver operations
* ensuring a virtual address
* on the buffer
* @dumb_create_func: callback function for .dumb_create
@@ -197,21 +197,21 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
* &drm_driver structure for drivers that need the virtual address also on
* imported buffers.
*
* This macro is a variant of DRM_GEM_CMA_DRIVER_OPS_VMAP for drivers that
* This macro is a variant of DRM_GEM_DMA_DRIVER_OPS_VMAP for drivers that
* override the default implementation of &struct drm_driver.dumb_create. Use
* DRM_GEM_CMA_DRIVER_OPS_VMAP if possible. Drivers that do not require a
* DRM_GEM_DMA_DRIVER_OPS_VMAP if possible. Drivers that do not require a
* virtual address on imported buffers should use
* DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE() instead.
* DRM_GEM_DMA_DRIVER_OPS_WITH_DUMB_CREATE() instead.
*/
#define DRM_GEM_CMA_DRIVER_OPS_VMAP_WITH_DUMB_CREATE(dumb_create_func) \
#define DRM_GEM_DMA_DRIVER_OPS_VMAP_WITH_DUMB_CREATE(dumb_create_func) \
.dumb_create = dumb_create_func, \
.prime_handle_to_fd = drm_gem_prime_handle_to_fd, \
.prime_fd_to_handle = drm_gem_prime_fd_to_handle, \
.gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table_vmap, \
.gem_prime_import_sg_table = drm_gem_dma_prime_import_sg_table_vmap, \
.gem_prime_mmap = drm_gem_prime_mmap

/**
* DRM_GEM_CMA_DRIVER_OPS_VMAP - CMA GEM driver operations ensuring a virtual
* DRM_GEM_DMA_DRIVER_OPS_VMAP - DMA GEM driver operations ensuring a virtual
* address on the buffer
*
* This macro provides a shortcut for setting the default GEM operations in the
@@ -220,16 +220,16 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
*
* Drivers that come with their own implementation of
* &struct drm_driver.dumb_create should use
* DRM_GEM_CMA_DRIVER_OPS_VMAP_WITH_DUMB_CREATE() instead. Use
* DRM_GEM_CMA_DRIVER_OPS_VMAP if possible. Drivers that do not require a
* virtual address on imported buffers should use DRM_GEM_CMA_DRIVER_OPS
* DRM_GEM_DMA_DRIVER_OPS_VMAP_WITH_DUMB_CREATE() instead. Use
* DRM_GEM_DMA_DRIVER_OPS_VMAP if possible. Drivers that do not require a
* virtual address on imported buffers should use DRM_GEM_DMA_DRIVER_OPS
* instead.
*/
#define DRM_GEM_CMA_DRIVER_OPS_VMAP \
DRM_GEM_CMA_DRIVER_OPS_VMAP_WITH_DUMB_CREATE(drm_gem_cma_dumb_create)
#define DRM_GEM_DMA_DRIVER_OPS_VMAP \
DRM_GEM_DMA_DRIVER_OPS_VMAP_WITH_DUMB_CREATE(drm_gem_dma_dumb_create)

struct drm_gem_object *
drm_gem_cma_prime_import_sg_table_vmap(struct drm_device *drm,
drm_gem_dma_prime_import_sg_table_vmap(struct drm_device *drm,
struct dma_buf_attachment *attach,
struct sg_table *sgt);

@@ -238,22 +238,22 @@ drm_gem_cma_prime_import_sg_table_vmap(struct drm_device *drm,
*/

#ifndef CONFIG_MMU
unsigned long drm_gem_cma_get_unmapped_area(struct file *filp,
unsigned long drm_gem_dma_get_unmapped_area(struct file *filp,
unsigned long addr,
unsigned long len,
unsigned long pgoff,
unsigned long flags);
#define DRM_GEM_CMA_UNMAPPED_AREA_FOPS \
.get_unmapped_area = drm_gem_cma_get_unmapped_area,
#define DRM_GEM_DMA_UNMAPPED_AREA_FOPS \
.get_unmapped_area = drm_gem_dma_get_unmapped_area,
#else
#define DRM_GEM_CMA_UNMAPPED_AREA_FOPS
#define DRM_GEM_DMA_UNMAPPED_AREA_FOPS
#endif

/**
* DEFINE_DRM_GEM_CMA_FOPS() - macro to generate file operations for CMA drivers
* DEFINE_DRM_GEM_DMA_FOPS() - macro to generate file operations for DMA drivers
* @name: name for the generated structure
*
* This macro autogenerates a suitable &struct file_operations for CMA based
* This macro autogenerates a suitable &struct file_operations for DMA based
* drivers, which can be assigned to &drm_driver.fops. Note that this structure
* cannot be shared between drivers, because it contains a reference to the
* current module using THIS_MODULE.
@@ -262,7 +262,7 @@ unsigned long drm_gem_cma_get_unmapped_area(struct file *filp,
* non-static version of this you're probably doing it wrong and will break the
* THIS_MODULE reference by accident.
*/
#define DEFINE_DRM_GEM_CMA_FOPS(name) \
#define DEFINE_DRM_GEM_DMA_FOPS(name) \
static const struct file_operations name = {\
.owner = THIS_MODULE,\
.open = drm_open,\
@@ -273,7 +273,7 @@ unsigned long drm_gem_cma_get_unmapped_area(struct file *filp,
.read = drm_read,\
.llseek = noop_llseek,\
.mmap = drm_gem_mmap,\
DRM_GEM_CMA_UNMAPPED_AREA_FOPS \
DRM_GEM_DMA_UNMAPPED_AREA_FOPS \
}

#endif /* __DRM_GEM_CMA_HELPER_H__ */
#endif /* __DRM_GEM_DMA_HELPER_H__ */
@@ -210,7 +210,7 @@ static inline void drm_gem_shmem_object_unpin(struct drm_gem_object *obj)
* use it as their &drm_gem_object_funcs.get_sg_table handler.
*
* Returns:
* A pointer to the scatter/gather table of pinned pages or NULL on failure.
* A pointer to the scatter/gather table of pinned pages or error pointer on failure.
*/
static inline struct sg_table *drm_gem_shmem_object_get_sg_table(struct drm_gem_object *obj)
{

@@ -155,6 +155,8 @@ int mipi_dbi_dev_init_with_formats(struct mipi_dbi_dev *dbidev,
int mipi_dbi_dev_init(struct mipi_dbi_dev *dbidev,
const struct drm_simple_display_pipe_funcs *funcs,
const struct drm_display_mode *mode, unsigned int rotation);
enum drm_mode_status mipi_dbi_pipe_mode_valid(struct drm_simple_display_pipe *pipe,
const struct drm_display_mode *mode);
void mipi_dbi_pipe_update(struct drm_simple_display_pipe *pipe,
struct drm_plane_state *old_state);
void mipi_dbi_enable_flush(struct mipi_dbi_dev *dbidev,

@@ -179,6 +179,7 @@ struct mipi_dsi_device_info {
* @lp_rate: maximum lane frequency for low power mode in hertz, this should
* be set to the real limits of the hardware, zero is only accepted for
* legacy drivers
* @dsc: panel/bridge DSC pps payload to be sent
*/
struct mipi_dsi_device {
struct mipi_dsi_host *host;
@@ -191,6 +192,7 @@ struct mipi_dsi_device {
unsigned long mode_flags;
unsigned long hs_rate;
unsigned long lp_rate;
struct drm_dsc_config *dsc;
};

#define MIPI_DSI_MODULE_PREFIX "mipi-dsi:"
@@ -322,7 +324,7 @@ int mipi_dsi_dcs_get_display_brightness(struct mipi_dsi_device *dsi,
struct mipi_dsi_driver {
struct device_driver driver;
int(*probe)(struct mipi_dsi_device *dsi);
int(*remove)(struct mipi_dsi_device *dsi);
void (*remove)(struct mipi_dsi_device *dsi);
void (*shutdown)(struct mipi_dsi_device *dsi);
};


@@ -138,6 +138,35 @@ enum drm_mode_status {
.vsync_start = (vss), .vsync_end = (vse), .vtotal = (vt), \
.vscan = (vs), .flags = (f)

/**
* DRM_MODE_RES_MM - Calculates the display size from resolution and DPI
* @res: The resolution in pixel
* @dpi: The number of dots per inch
*/
#define DRM_MODE_RES_MM(res, dpi) \
(((res) * 254ul) / ((dpi) * 10ul))

#define __DRM_MODE_INIT(pix, hd, vd, hd_mm, vd_mm) \
.type = DRM_MODE_TYPE_DRIVER, .clock = (pix), \
.hdisplay = (hd), .hsync_start = (hd), .hsync_end = (hd), \
.htotal = (hd), .vdisplay = (vd), .vsync_start = (vd), \
.vsync_end = (vd), .vtotal = (vd), .width_mm = (hd_mm), \
.height_mm = (vd_mm)

/**
* DRM_MODE_INIT - Initialize display mode
* @hz: Vertical refresh rate in Hertz
* @hd: Horizontal resolution, width
* @vd: Vertical resolution, height
* @hd_mm: Display width in millimeters
* @vd_mm: Display height in millimeters
*
* This macro initializes a &drm_display_mode that contains information about
* refresh rate, resolution and physical size.
*/
#define DRM_MODE_INIT(hz, hd, vd, hd_mm, vd_mm) \
__DRM_MODE_INIT((hd) * (vd) * (hz) / 1000 /* kHz */, hd, vd, hd_mm, vd_mm)

/**
* DRM_SIMPLE_MODE - Simple display mode
* @hd: Horizontal resolution, width
@@ -149,11 +178,7 @@ enum drm_mode_status {
* resolution and physical size.
*/
#define DRM_SIMPLE_MODE(hd, vd, hd_mm, vd_mm) \
.type = DRM_MODE_TYPE_DRIVER, .clock = 1 /* pass validation */, \
.hdisplay = (hd), .hsync_start = (hd), .hsync_end = (hd), \
.htotal = (hd), .vdisplay = (vd), .vsync_start = (vd), \
.vsync_end = (vd), .vtotal = (vd), .width_mm = (hd_mm), \
.height_mm = (vd_mm)
__DRM_MODE_INIT(1 /* pass validation */, hd, vd, hd_mm, vd_mm)

#define CRTC_INTERLACE_HALVE_V (1 << 0) /* halve V values for interlacing */
#define CRTC_STEREO_DOUBLE (1 << 1) /* adjust timings for stereo modes */

@@ -188,13 +188,6 @@ struct drm_panel {
* Panel entry in registry.
*/
struct list_head list;

/**
* @dsc:
*
* Panel DSC pps payload to be sent
*/
struct drm_dsc_config *dsc;
};

void drm_panel_init(struct drm_panel *panel, struct device *dev,

@@ -631,7 +631,7 @@ struct drm_plane {
unsigned int format_count;
/**
* @format_default: driver hasn't supplied supported formats for the
* plane. Used by the drm_plane_init compatibility wrapper only.
* plane. Used by the non-atomic driver compatibility wrapper only.
*/
bool format_default;

@@ -762,12 +762,6 @@ int drm_universal_plane_init(struct drm_device *dev,
const uint64_t *format_modifiers,
enum drm_plane_type type,
const char *name, ...);
int drm_plane_init(struct drm_device *dev,
struct drm_plane *plane,
uint32_t possible_crtcs,
const struct drm_plane_funcs *funcs,
const uint32_t *formats, unsigned int format_count,
bool is_primary);
void drm_plane_cleanup(struct drm_plane *plane);

__printf(10, 11)
@@ -815,6 +809,50 @@ void *__drmm_universal_plane_alloc(struct drm_device *dev,
format_count, format_modifiers, \
plane_type, name, ##__VA_ARGS__))

__printf(10, 11)
void *__drm_universal_plane_alloc(struct drm_device *dev,
size_t size, size_t offset,
uint32_t possible_crtcs,
const struct drm_plane_funcs *funcs,
const uint32_t *formats,
unsigned int format_count,
const uint64_t *format_modifiers,
enum drm_plane_type plane_type,
const char *name, ...);

/**
* drm_universal_plane_alloc() - Allocate and initialize an universal plane object
* @dev: DRM device
* @type: the type of the struct which contains struct &drm_plane
* @member: the name of the &drm_plane within @type
* @possible_crtcs: bitmask of possible CRTCs
* @funcs: callbacks for the new plane
* @formats: array of supported formats (DRM_FORMAT\_\*)
* @format_count: number of elements in @formats
* @format_modifiers: array of struct drm_format modifiers terminated by
* DRM_FORMAT_MOD_INVALID
* @plane_type: type of plane (overlay, primary, cursor)
* @name: printf style format string for the plane name, or NULL for default name
*
* Allocates and initializes a plane object of type @type. The caller
* is responsible for releasing the allocated memory with kfree().
*
* Drivers are encouraged to use drmm_universal_plane_alloc() instead.
*
* Drivers that only support the DRM_FORMAT_MOD_LINEAR modifier support may set
* @format_modifiers to NULL. The plane will advertise the linear modifier.
*
* Returns:
* Pointer to new plane, or ERR_PTR on failure.
*/
#define drm_universal_plane_alloc(dev, type, member, possible_crtcs, funcs, formats, \
format_count, format_modifiers, plane_type, name, ...) \
((type *)__drm_universal_plane_alloc(dev, sizeof(type), \
offsetof(type, member), \
possible_crtcs, funcs, formats, \
format_count, format_modifiers, \
plane_type, name, ##__VA_ARGS__))

/**
* drm_plane_index - find the index of a registered plane
* @plane: plane to find index for

@@ -24,21 +24,35 @@
#ifndef DRM_PLANE_HELPER_H
#define DRM_PLANE_HELPER_H

#include <drm/drm_rect.h>
#include <drm/drm_crtc.h>
#include <drm/drm_modeset_helper_vtables.h>
#include <drm/drm_modeset_helper.h>
#include <linux/types.h>

/*
* Drivers that don't allow primary plane scaling may pass this macro in place
* of the min/max scale parameters of the update checker function.
struct drm_crtc;
struct drm_framebuffer;
struct drm_modeset_acquire_ctx;
struct drm_plane;

int drm_plane_helper_update_primary(struct drm_plane *plane, struct drm_crtc *crtc,
struct drm_framebuffer *fb,
int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t src_x, uint32_t src_y,
uint32_t src_w, uint32_t src_h,
struct drm_modeset_acquire_ctx *ctx);
int drm_plane_helper_disable_primary(struct drm_plane *plane,
struct drm_modeset_acquire_ctx *ctx);
void drm_plane_helper_destroy(struct drm_plane *plane);
int drm_plane_helper_atomic_check(struct drm_plane *plane, struct drm_atomic_state *state);

/**
* DRM_PLANE_NON_ATOMIC_FUNCS - Default plane functions for non-atomic drivers
*
* Due to src being in 16.16 fixed point and dest being in integer pixels,
* 1<<16 represents no scaling.
* This macro initializes plane functions for non-atomic drivers to default
* values. Non-atomic interfaces are deprecated and should not be used in new
* drivers.
*/
#define DRM_PLANE_HELPER_NO_SCALING (1<<16)

void drm_primary_helper_destroy(struct drm_plane *plane);
extern const struct drm_plane_funcs drm_primary_helper_funcs;
#define DRM_PLANE_NON_ATOMIC_FUNCS \
.update_plane = drm_plane_helper_update_primary, \
.disable_plane = drm_plane_helper_disable_primary, \
.destroy = drm_plane_helper_destroy

#endif

@@ -3,9 +3,10 @@
 #ifndef __DRM_PROBE_HELPER_H__
 #define __DRM_PROBE_HELPER_H__

 #include <linux/types.h>
+#include <drm/drm_modes.h>

 struct drm_connector;
+struct drm_crtc;
 struct drm_device;
 struct drm_modeset_acquire_ctx;

@@ -26,7 +27,13 @@ void drm_kms_helper_poll_disable(struct drm_device *dev);
 void drm_kms_helper_poll_enable(struct drm_device *dev);
 bool drm_kms_helper_is_poll_worker(void);

+enum drm_mode_status drm_crtc_helper_mode_valid_fixed(struct drm_crtc *crtc,
+						      const struct drm_display_mode *mode,
+						      const struct drm_display_mode *fixed_mode);
+
 int drm_connector_helper_get_modes_from_ddc(struct drm_connector *connector);
+int drm_connector_helper_get_modes_fixed(struct drm_connector *connector,
+					 const struct drm_display_mode *fixed_mode);
 int drm_connector_helper_get_modes(struct drm_connector *connector);

 #endif
@@ -329,10 +329,10 @@ enum drm_gpu_sched_stat {
 };

 /**
- * struct drm_sched_backend_ops
+ * struct drm_sched_backend_ops - Define the backend operations
+ *	called by the scheduler
  *
- * Define the backend operations called by the scheduler,
- * these functions should be implemented in driver side.
+ * These functions should be implemented in the driver side.
  */
 struct drm_sched_backend_ops {
	/**

@@ -409,7 +409,7 @@ struct drm_sched_backend_ops {
 };

 /**
- * struct drm_gpu_scheduler
+ * struct drm_gpu_scheduler - scheduler instance-specific data
  *
  * @ops: backend operations provided by the driver.
  * @hw_submission_limit: the max size of the hardware queue.

@@ -435,6 +435,7 @@ struct drm_sched_backend_ops {
  * @_score: score used when the driver doesn't provide one
  * @ready: marks if the underlying HW is ready to work
  * @free_guilty: A hit to time out handler to free the guilty job.
+ * @dev: system &struct device
  *
  * One scheduler is implemented for each hardware ring.
  */
@@ -641,6 +641,7 @@
	INTEL_VGA_DEVICE(0x4682, info), \
	INTEL_VGA_DEVICE(0x4688, info), \
	INTEL_VGA_DEVICE(0x468A, info), \
+	INTEL_VGA_DEVICE(0x468B, info), \
	INTEL_VGA_DEVICE(0x4690, info), \
	INTEL_VGA_DEVICE(0x4692, info), \
	INTEL_VGA_DEVICE(0x4693, info)
@@ -317,93 +317,16 @@ void ttm_bo_unlock_delayed_workqueue(struct ttm_device *bdev, int resched);
 bool ttm_bo_eviction_valuable(struct ttm_buffer_object *bo,
			      const struct ttm_place *place);

-/**
- * ttm_bo_init_reserved
- *
- * @bdev: Pointer to a ttm_device struct.
- * @bo: Pointer to a ttm_buffer_object to be initialized.
- * @size: Requested size of buffer object.
- * @type: Requested type of buffer object.
- * @placement: Initial placement for buffer object.
- * @page_alignment: Data alignment in pages.
- * @ctx: TTM operation context for memory allocation.
- * @sg: Scatter-gather table.
- * @resv: Pointer to a dma_resv, or NULL to let ttm allocate one.
- * @destroy: Destroy function. Use NULL for kfree().
- *
- * This function initializes a pre-allocated struct ttm_buffer_object.
- * As this object may be part of a larger structure, this function,
- * together with the @destroy function, enables driver-specific objects
- * derived from a ttm_buffer_object.
- *
- * On successful return, the caller owns an object kref to @bo. The kref and
- * list_kref are usually set to 1, but note that in some situations, other
- * tasks may already be holding references to @bo as well.
- * Furthermore, if resv == NULL, the buffer's reservation lock will be held,
- * and it is the caller's responsibility to call ttm_bo_unreserve.
- *
- * If a failure occurs, the function will call the @destroy function, or
- * kfree() if @destroy is NULL. Thus, after a failure, dereferencing @bo is
- * illegal and will likely cause memory corruption.
- *
- * Returns
- * -ENOMEM: Out of memory.
- * -EINVAL: Invalid placement flags.
- * -ERESTARTSYS: Interrupted by signal while sleeping waiting for resources.
- */
-int ttm_bo_init_reserved(struct ttm_device *bdev,
-			 struct ttm_buffer_object *bo,
-			 size_t size, enum ttm_bo_type type,
-			 struct ttm_placement *placement,
-			 uint32_t page_alignment,
-			 struct ttm_operation_ctx *ctx,
+int ttm_bo_init_reserved(struct ttm_device *bdev, struct ttm_buffer_object *bo,
+			 enum ttm_bo_type type, struct ttm_placement *placement,
+			 uint32_t alignment, struct ttm_operation_ctx *ctx,
			 struct sg_table *sg, struct dma_resv *resv,
			 void (*destroy) (struct ttm_buffer_object *));
-
-/**
- * ttm_bo_init
- *
- * @bdev: Pointer to a ttm_device struct.
- * @bo: Pointer to a ttm_buffer_object to be initialized.
- * @size: Requested size of buffer object.
- * @type: Requested type of buffer object.
- * @placement: Initial placement for buffer object.
- * @page_alignment: Data alignment in pages.
- * @interruptible: If needing to sleep to wait for GPU resources,
- * sleep interruptible.
- * pinned in physical memory. If this behaviour is not desired, this member
- * holds a pointer to a persistent shmem object. Typically, this would
- * point to the shmem object backing a GEM object if TTM is used to back a
- * GEM user interface.
- * @sg: Scatter-gather table.
- * @resv: Pointer to a dma_resv, or NULL to let ttm allocate one.
- * @destroy: Destroy function. Use NULL for kfree().
- *
- * This function initializes a pre-allocated struct ttm_buffer_object.
- * As this object may be part of a larger structure, this function,
- * together with the @destroy function, enables driver-specific objects
- * derived from a ttm_buffer_object.
- *
- * On successful return, the caller owns an object kref to @bo. The kref and
- * list_kref are usually set to 1, but note that in some situations, other
- * tasks may already be holding references to @bo as well.
- *
- * If a failure occurs, the function will call the @destroy function, or
- * kfree() if @destroy is NULL. Thus, after a failure, dereferencing @bo is
- * illegal and will likely cause memory corruption.
- *
- * Returns
- * -ENOMEM: Out of memory.
- * -EINVAL: Invalid placement flags.
- * -ERESTARTSYS: Interrupted by signal while sleeping waiting for resources.
- */
-int ttm_bo_init(struct ttm_device *bdev, struct ttm_buffer_object *bo,
-		size_t size, enum ttm_bo_type type,
-		struct ttm_placement *placement,
-		uint32_t page_alignment, bool interrubtible,
-		struct sg_table *sg, struct dma_resv *resv,
-		void (*destroy) (struct ttm_buffer_object *));
+int ttm_bo_init_validate(struct ttm_device *bdev, struct ttm_buffer_object *bo,
+			 enum ttm_bo_type type, struct ttm_placement *placement,
+			 uint32_t alignment, bool interruptible,
+			 struct sg_table *sg, struct dma_resv *resv,
+			 void (*destroy) (struct ttm_buffer_object *));

 /**
  * ttm_kmap_obj_virtual
@@ -106,7 +106,7 @@ static inline int ttm_bo_reserve(struct ttm_buffer_object *bo,
				 bool interruptible, bool no_wait,
				 struct ww_acquire_ctx *ticket)
 {
-	int ret = 0;
+	int ret;

	if (no_wait) {
		bool success;
@@ -88,6 +88,38 @@ struct ttm_resource_manager_func {
	void (*free)(struct ttm_resource_manager *man,
		     struct ttm_resource *res);

+	/**
+	 * struct ttm_resource_manager_func member intersects
+	 *
+	 * @man: Pointer to a memory type manager.
+	 * @res: Pointer to a struct ttm_resource to be checked.
+	 * @place: Placement to check against.
+	 * @size: Size of the check.
+	 *
+	 * Test if @res intersects with @place + @size. Used to judge if
+	 * evictions are valuable or not.
+	 */
+	bool (*intersects)(struct ttm_resource_manager *man,
+			   struct ttm_resource *res,
+			   const struct ttm_place *place,
+			   size_t size);
+
+	/**
+	 * struct ttm_resource_manager_func member compatible
+	 *
+	 * @man: Pointer to a memory type manager.
+	 * @res: Pointer to a struct ttm_resource to be checked.
+	 * @place: Placement to check against.
+	 * @size: Size of the check.
+	 *
+	 * Test if @res is compatible with @place + @size. Used to check
+	 * whether the backing store needs to be moved or not.
+	 */
+	bool (*compatible)(struct ttm_resource_manager *man,
+			   struct ttm_resource *res,
+			   const struct ttm_place *place,
+			   size_t size);
+
	/**
	 * struct ttm_resource_manager_func member debug
	 *

@@ -329,6 +361,14 @@ int ttm_resource_alloc(struct ttm_buffer_object *bo,
		       const struct ttm_place *place,
		       struct ttm_resource **res);
 void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res);
+bool ttm_resource_intersects(struct ttm_device *bdev,
+			     struct ttm_resource *res,
+			     const struct ttm_place *place,
+			     size_t size);
+bool ttm_resource_compatible(struct ttm_device *bdev,
+			     struct ttm_resource *res,
+			     const struct ttm_place *place,
+			     size_t size);
 bool ttm_resource_compat(struct ttm_resource *res,
			 struct ttm_placement *placement);
 void ttm_resource_set_bo(struct ttm_resource *res,