int
uvm_map(struct vm_map *map, vaddr_t *startp, vsize_t size, struct uvm_object *uobj, voff_t uoffset, vsize_t align, uvm_flag_t flags);
void
uvm_unmap(struct vm_map *map, vaddr_t start, vaddr_t end);
int
uvm_map_pageable(struct vm_map *map, vaddr_t start, vaddr_t end, bool new_pageable, int lockflags);
bool
uvm_map_checkprot(struct vm_map *map, vaddr_t start, vaddr_t end, vm_prot_t protection);
int
uvm_map_protect(struct vm_map *map, vaddr_t start, vaddr_t end, vm_prot_t new_prot, bool set_max);
void
uvm_deallocate(struct vm_map *map, vaddr_t start, vsize_t size);
struct vmspace *
uvmspace_alloc(vaddr_t min, vaddr_t max, int pageable);
void
uvmspace_exec(struct lwp *l, vaddr_t start, vaddr_t end);
struct vmspace *
uvmspace_fork(struct vmspace *vm);
void
uvmspace_free(struct vmspace *vm1);
void
uvmspace_share(struct proc *p1, struct proc *p2);
void
uvmspace_unshare(struct lwp *l);
bool
uvm_uarea_alloc(vaddr_t *uaddrp);
void
uvm_uarea_free(vaddr_t uaddr);
uvm_map() establishes a valid mapping in map map, which must be unlocked. The new mapping has size size, which must be a multiple of PAGE_SIZE. The uobj and uoffset arguments can have four meanings. When uobj is NULL and uoffset is UVM_UNKNOWN_OFFSET, uvm_map() does not use the machine-dependent PMAP_PREFER function. If uoffset is any other value, it is used as the hint to PMAP_PREFER. When uobj is not NULL and uoffset is UVM_UNKNOWN_OFFSET, uvm_map() finds the offset based upon the virtual address, passed as startp. If uoffset is any other value, a normal mapping is done at this offset. The start address of the map will be returned in startp.
align specifies the alignment of the mapping unless UVM_FLAG_FIXED is specified in flags. align must be a power of 2.
The flags passed to uvm_map() are typically created using the UVM_MAPFLAG(vm_prot_t prot, vm_prot_t maxprot, vm_inherit_t inh, int advice, int flags) macro, which uses the following values. The values that prot and maxprot can take are:
#define UVM_PROT_MASK 0x07 /* protection mask */
#define UVM_PROT_NONE 0x00 /* protection none */
#define UVM_PROT_ALL 0x07 /* everything */
#define UVM_PROT_READ 0x01 /* read */
#define UVM_PROT_WRITE 0x02 /* write */
#define UVM_PROT_EXEC 0x04 /* exec */
#define UVM_PROT_R 0x01 /* read */
#define UVM_PROT_W 0x02 /* write */
#define UVM_PROT_RW 0x03 /* read-write */
#define UVM_PROT_X 0x04 /* exec */
#define UVM_PROT_RX 0x05 /* read-exec */
#define UVM_PROT_WX 0x06 /* write-exec */
#define UVM_PROT_RWX 0x07 /* read-write-exec */
The values that inh can take are:
#define UVM_INH_MASK 0x30 /* inherit mask */
#define UVM_INH_SHARE 0x00 /* "share" */
#define UVM_INH_COPY 0x10 /* "copy" */
#define UVM_INH_NONE 0x20 /* "none" */
#define UVM_INH_DONATE 0x30 /* "donate" << not used */
The values that advice can take are:
#define UVM_ADV_NORMAL 0x0 /* 'normal' */
#define UVM_ADV_RANDOM 0x1 /* 'random' */
#define UVM_ADV_SEQUENTIAL 0x2 /* 'sequential' */
#define UVM_ADV_MASK 0x7 /* mask */
The values that flags can take are:
#define UVM_FLAG_FIXED 0x010000 /* find space */
#define UVM_FLAG_OVERLAY 0x020000 /* establish overlay */
#define UVM_FLAG_NOMERGE 0x040000 /* don't merge map entries */
#define UVM_FLAG_COPYONW 0x080000 /* set copy_on_write flag */
#define UVM_FLAG_AMAPPAD 0x100000 /* for bss: pad amap to reduce malloc() */
#define UVM_FLAG_TRYLOCK 0x200000 /* fail if we can not lock map */
The UVM_MAPFLAG macro arguments can be combined with a bitwise OR operator. There are several special purpose macros for checking protection combinations, e.g., the UVM_PROT_WX macro. There are also some additional macros to extract bits from the flags. The UVM_PROTECTION, UVM_INHERIT, UVM_MAXPROTECTION and UVM_ADVICE macros return the protection, inheritance, maximum protection and advice, respectively. uvm_map() returns a standard UVM return value.
uvm_unmap() removes a valid mapping, from start to end, in map map, which must be unlocked.
uvm_map_pageable() changes the pageability of the pages in the range from start to end in map map to new_pageable. uvm_map_pageable() returns a standard UVM return value.
uvm_map_checkprot() checks the protection of the range from start to end in map map against protection. This returns either true or false.
uvm_map_protect() changes the protection of the range from start to end in map map to new_prot, also setting the maximum protection of the region to new_prot if set_max is true. This function returns a standard UVM return value.
uvm_deallocate() deallocates kernel memory in map map from address start to start + size.
uvmspace_alloc() allocates and returns a new address space, with ranges from min to max, setting the pageability of the address space to pageable.
uvmspace_exec() either reuses the address space of lwp l if there are no other references to it, or creates a new one with uvmspace_alloc(). The range of valid addresses in the address space is reset to start through end.
uvmspace_fork() creates and returns a new address space based upon the vm address space, typically used when allocating an address space for a child process.
uvmspace_free() lowers the reference count on the address space vm1, freeing the data structures if there are no other references.
uvmspace_share() causes process p2 to share the address space of p1.
uvmspace_unshare() ensures that lwp l has its own, unshared address space, by creating a new one if necessary by calling uvmspace_fork().
uvm_uarea_alloc() allocates virtual space for a u-area (i.e., a kernel stack) and stores its virtual address in *uaddrp. The return value is true if the u-area is already backed by wired physical memory, otherwise false.
uvm_uarea_free() frees a u-area allocated with uvm_uarea_alloc(), releasing both the virtual space and any physical pages that may later have been allocated to back that virtual space.