Millions of people with partial or complete hearing loss use variants of sign language to communicate with each other and with hearing people in their everyday lives. It is therefore imperative to develop systems that assist these people by removing the barriers that hinder their social inclusion. Such systems should aim to capture sign language accurately, classify signs into natural-language words, and represent sign language by having avatars or synthesized videos execute the exact movements that convey meaning in sign language. This chapter reviews current state-of-the-art approaches to sign language recognition and representation and analyzes the challenges they face. It then presents a novel AI-based solution for robust sign language capturing and representation, together with a solution to the scarcity of annotated sign language datasets, before discussing limitations and directions for future work.